00:00:00.001 Started by upstream project "autotest-per-patch" build number 121270 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.094 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.095 The recommended git tool is: git 00:00:00.095 using credential 00000000-0000-0000-0000-000000000002 00:00:00.101 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.161 Fetching changes from the remote Git repository 00:00:00.163 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.206 Using shallow fetch with depth 1 00:00:00.206 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.206 > git --version # timeout=10 00:00:00.233 > git --version # 'git version 2.39.2' 00:00:00.233 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.234 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.234 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:08.761 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:08.775 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:08.788 Checking out Revision f964f6d3463483adf05cc5c086f2abd292e05f1d (FETCH_HEAD) 00:00:08.788 > git config core.sparsecheckout # timeout=10 00:00:08.800 > git read-tree -mu HEAD # timeout=10 00:00:08.816 > git checkout -f f964f6d3463483adf05cc5c086f2abd292e05f1d # timeout=5 00:00:08.839 Commit message: "ansible/roles/custom_facts: Drop nvme features" 00:00:08.839 > git rev-list --no-walk f964f6d3463483adf05cc5c086f2abd292e05f1d # timeout=10 00:00:08.959 [Pipeline] Start of Pipeline 00:00:08.974 [Pipeline] library 00:00:08.976 Loading library shm_lib@master 00:00:08.976 Library shm_lib@master is cached. Copying from home. 00:00:08.992 [Pipeline] node 00:00:09.003 Running on WFP29 in /var/jenkins/workspace/nvmf-phy-autotest 00:00:09.005 [Pipeline] { 00:00:09.013 [Pipeline] catchError 00:00:09.014 [Pipeline] { 00:00:09.025 [Pipeline] wrap 00:00:09.033 [Pipeline] { 00:00:09.038 [Pipeline] stage 00:00:09.040 [Pipeline] { (Prologue) 00:00:09.218 [Pipeline] sh 00:00:09.547 + logger -p user.info -t JENKINS-CI 00:00:09.592 [Pipeline] echo 00:00:09.594 Node: WFP29 00:00:09.617 [Pipeline] sh 00:00:09.941 [Pipeline] setCustomBuildProperty 00:00:09.955 [Pipeline] echo 00:00:09.956 Cleanup processes 00:00:09.962 [Pipeline] sh 00:00:10.249 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:10.249 222883 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:10.263 [Pipeline] sh 00:00:10.546 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:10.546 ++ grep -v 'sudo pgrep' 00:00:10.546 ++ awk '{print $1}' 00:00:10.546 + sudo kill -9 00:00:10.546 + true 00:00:10.562 [Pipeline] cleanWs 00:00:10.571 [WS-CLEANUP] Deleting project workspace... 00:00:10.571 [WS-CLEANUP] Deferred wipeout is used... 
00:00:10.577 [WS-CLEANUP] done 00:00:10.582 [Pipeline] setCustomBuildProperty 00:00:10.598 [Pipeline] sh 00:00:10.882 + sudo git config --global --replace-all safe.directory '*' 00:00:10.959 [Pipeline] nodesByLabel 00:00:10.961 Found a total of 1 nodes with the 'sorcerer' label 00:00:10.973 [Pipeline] httpRequest 00:00:10.978 HttpMethod: GET 00:00:10.979 URL: http://10.211.164.96/packages/jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz 00:00:10.990 Sending request to url: http://10.211.164.96/packages/jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz 00:00:11.006 Response Code: HTTP/1.1 200 OK 00:00:11.007 Success: Status code 200 is in the accepted range: 200,404 00:00:11.007 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz 00:00:18.420 [Pipeline] sh 00:00:18.704 + tar --no-same-owner -xf jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz 00:00:18.721 [Pipeline] httpRequest 00:00:18.725 HttpMethod: GET 00:00:18.726 URL: http://10.211.164.96/packages/spdk_bba4d07b01fce8796cbb4e4cca76c0ab77f2a4fb.tar.gz 00:00:18.727 Sending request to url: http://10.211.164.96/packages/spdk_bba4d07b01fce8796cbb4e4cca76c0ab77f2a4fb.tar.gz 00:00:18.743 Response Code: HTTP/1.1 200 OK 00:00:18.743 Success: Status code 200 is in the accepted range: 200,404 00:00:18.743 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_bba4d07b01fce8796cbb4e4cca76c0ab77f2a4fb.tar.gz 00:01:03.346 [Pipeline] sh 00:01:03.635 + tar --no-same-owner -xf spdk_bba4d07b01fce8796cbb4e4cca76c0ab77f2a4fb.tar.gz 00:01:06.186 [Pipeline] sh 00:01:06.471 + git -C spdk log --oneline -n5 00:01:06.471 bba4d07b0 nvmf/tcp: register and use trace owners 00:01:06.471 6d865357b nvmf/tcp: add nvmf_qpair_set_ctrlr helper function 00:01:06.471 758a0f8d6 app/trace: emit owner descriptions 00:01:06.471 00c779c77 trace: rename trace_event's poller_id to owner_id 00:01:06.471 6a136826c trace: add concept of "owner" to trace files 00:01:06.483 [Pipeline] } 00:01:06.499 [Pipeline] // stage 00:01:06.507 [Pipeline] stage 00:01:06.509 [Pipeline] { (Prepare) 00:01:06.527 [Pipeline] writeFile 00:01:06.542 [Pipeline] sh 00:01:06.828 + logger -p user.info -t JENKINS-CI 00:01:06.840 [Pipeline] sh 00:01:07.124 + logger -p user.info -t JENKINS-CI 00:01:07.136 [Pipeline] sh 00:01:07.423 + cat autorun-spdk.conf 00:01:07.423 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:07.423 SPDK_TEST_NVMF=1 00:01:07.423 SPDK_TEST_NVME_CLI=1 00:01:07.423 SPDK_TEST_NVMF_NICS=mlx5 00:01:07.423 SPDK_RUN_UBSAN=1 00:01:07.423 NET_TYPE=phy 00:01:07.431 RUN_NIGHTLY=0 00:01:07.435 [Pipeline] readFile 00:01:07.460 [Pipeline] withEnv 00:01:07.462 [Pipeline] { 00:01:07.476 [Pipeline] sh 00:01:07.762 + set -ex 00:01:07.762 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]] 00:01:07.762 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:07.762 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:07.762 ++ SPDK_TEST_NVMF=1 00:01:07.762 ++ SPDK_TEST_NVME_CLI=1 00:01:07.762 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:07.762 ++ SPDK_RUN_UBSAN=1 00:01:07.762 ++ NET_TYPE=phy 00:01:07.762 ++ RUN_NIGHTLY=0 00:01:07.762 + case $SPDK_TEST_NVMF_NICS in 00:01:07.762 + DRIVERS=mlx5_ib 00:01:07.762 + [[ -n mlx5_ib ]] 00:01:07.762 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:07.762 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:10.303 rmmod: ERROR: Module irdma is not currently loaded 00:01:10.303 rmmod: ERROR: Module i40iw is not currently loaded 00:01:10.303 rmmod: ERROR: Module iw_cxgb4 is not 
currently loaded 00:01:10.303 + true 00:01:10.303 + for D in $DRIVERS 00:01:10.303 + sudo modprobe mlx5_ib 00:01:10.563 + exit 0 00:01:10.573 [Pipeline] } 00:01:10.590 [Pipeline] // withEnv 00:01:10.596 [Pipeline] } 00:01:10.610 [Pipeline] // stage 00:01:10.618 [Pipeline] catchError 00:01:10.619 [Pipeline] { 00:01:10.631 [Pipeline] timeout 00:01:10.632 Timeout set to expire in 40 min 00:01:10.634 [Pipeline] { 00:01:10.649 [Pipeline] stage 00:01:10.651 [Pipeline] { (Tests) 00:01:10.666 [Pipeline] sh 00:01:10.952 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest 00:01:10.952 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest 00:01:10.952 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest 00:01:10.952 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]] 00:01:10.952 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:10.952 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output 00:01:10.952 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]] 00:01:10.952 + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:10.952 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output 00:01:10.952 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:10.952 + cd /var/jenkins/workspace/nvmf-phy-autotest 00:01:10.952 + source /etc/os-release 00:01:10.952 ++ NAME='Fedora Linux' 00:01:10.952 ++ VERSION='38 (Cloud Edition)' 00:01:10.952 ++ ID=fedora 00:01:10.952 ++ VERSION_ID=38 00:01:10.952 ++ VERSION_CODENAME= 00:01:10.952 ++ PLATFORM_ID=platform:f38 00:01:10.952 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:10.952 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:10.952 ++ LOGO=fedora-logo-icon 00:01:10.952 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:10.952 ++ HOME_URL=https://fedoraproject.org/ 00:01:10.952 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:10.952 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:10.952 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:10.952 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:10.952 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:10.952 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:10.952 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:10.952 ++ SUPPORT_END=2024-05-14 00:01:10.952 ++ VARIANT='Cloud Edition' 00:01:10.952 ++ VARIANT_ID=cloud 00:01:10.952 + uname -a 00:01:10.952 Linux spdk-wfp-29 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:10.952 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:01:13.496 Hugepages 00:01:13.496 node hugesize free / total 00:01:13.496 node0 1048576kB 0 / 0 00:01:13.496 node0 2048kB 0 / 0 00:01:13.496 node1 1048576kB 0 / 0 00:01:13.496 node1 2048kB 0 / 0 00:01:13.496 00:01:13.496 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:13.496 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:13.496 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:13.496 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:13.496 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:13.496 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:13.496 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:13.496 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:13.496 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:13.757 NVMe 0000:5e:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:01:13.757 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:13.757 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:13.757 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:13.757 I/OAT 
0000:80:04.3 8086 2021 1 ioatdma - - 00:01:13.757 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:13.757 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:13.757 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:13.757 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:13.757 NVMe 0000:af:00.0 8086 2701 1 nvme nvme1 nvme1n1 00:01:14.017 NVMe 0000:b0:00.0 8086 2701 1 nvme nvme2 nvme2n1 00:01:14.017 + rm -f /tmp/spdk-ld-path 00:01:14.017 + source autorun-spdk.conf 00:01:14.017 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:14.017 ++ SPDK_TEST_NVMF=1 00:01:14.017 ++ SPDK_TEST_NVME_CLI=1 00:01:14.017 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:14.017 ++ SPDK_RUN_UBSAN=1 00:01:14.017 ++ NET_TYPE=phy 00:01:14.017 ++ RUN_NIGHTLY=0 00:01:14.017 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:14.017 + [[ -n '' ]] 00:01:14.017 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:14.017 + for M in /var/spdk/build-*-manifest.txt 00:01:14.017 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:14.017 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:14.017 + for M in /var/spdk/build-*-manifest.txt 00:01:14.017 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:14.018 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:14.018 ++ uname 00:01:14.018 + [[ Linux == \L\i\n\u\x ]] 00:01:14.018 + sudo dmesg -T 00:01:14.018 + sudo dmesg --clear 00:01:14.018 + dmesg_pid=224363 00:01:14.018 + [[ Fedora Linux == FreeBSD ]] 00:01:14.018 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:14.018 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:14.018 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:14.018 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:14.018 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:14.018 + [[ -x /usr/src/fio-static/fio ]] 00:01:14.018 + export FIO_BIN=/usr/src/fio-static/fio 00:01:14.018 + FIO_BIN=/usr/src/fio-static/fio 00:01:14.018 + sudo dmesg -Tw 00:01:14.018 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:14.018 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:14.018 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:14.018 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:14.018 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:14.018 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:14.018 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:14.018 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:14.018 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:14.018 Test configuration: 00:01:14.018 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:14.018 SPDK_TEST_NVMF=1 00:01:14.018 SPDK_TEST_NVME_CLI=1 00:01:14.018 SPDK_TEST_NVMF_NICS=mlx5 00:01:14.018 SPDK_RUN_UBSAN=1 00:01:14.018 NET_TYPE=phy 00:01:14.018 RUN_NIGHTLY=0 16:12:23 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:01:14.018 16:12:23 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:14.018 16:12:23 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:14.018 16:12:23 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:14.018 16:12:23 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:14.018 16:12:23 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:14.018 16:12:23 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:14.018 16:12:23 -- paths/export.sh@5 -- $ export PATH 00:01:14.018 16:12:23 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:14.018 16:12:23 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:01:14.018 16:12:23 -- common/autobuild_common.sh@435 -- $ date +%s 00:01:14.018 16:12:23 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714140743.XXXXXX 00:01:14.018 16:12:23 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714140743.iOL8WD 00:01:14.018 16:12:23 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:01:14.018 16:12:23 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:01:14.018 16:12:23 -- common/autobuild_common.sh@444 
-- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:01:14.018 16:12:23 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:14.018 16:12:23 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:14.018 16:12:23 -- common/autobuild_common.sh@451 -- $ get_config_params 00:01:14.018 16:12:23 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:01:14.018 16:12:23 -- common/autotest_common.sh@10 -- $ set +x 00:01:14.278 16:12:23 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:01:14.278 16:12:23 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:01:14.278 16:12:23 -- pm/common@17 -- $ local monitor 00:01:14.278 16:12:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:14.278 16:12:23 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=224397 00:01:14.278 16:12:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:14.278 16:12:23 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=224399 00:01:14.278 16:12:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:14.278 16:12:23 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=224400 00:01:14.278 16:12:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:14.278 16:12:23 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=224402 00:01:14.278 16:12:23 -- pm/common@26 -- $ sleep 1 00:01:14.278 16:12:23 -- pm/common@21 -- $ date +%s 00:01:14.278 16:12:23 -- pm/common@21 -- $ date +%s 00:01:14.278 16:12:23 -- pm/common@21 -- $ date +%s 00:01:14.278 16:12:23 -- pm/common@21 -- $ date +%s 00:01:14.278 16:12:23 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714140743 00:01:14.278 16:12:23 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714140743 00:01:14.278 16:12:23 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714140743 00:01:14.278 16:12:23 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714140743 00:01:14.278 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714140743_collect-bmc-pm.bmc.pm.log 00:01:14.278 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714140743_collect-cpu-temp.pm.log 00:01:14.278 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714140743_collect-vmstat.pm.log 00:01:14.278 Redirecting to 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714140743_collect-cpu-load.pm.log 00:01:15.217 16:12:24 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:01:15.217 16:12:24 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:15.217 16:12:24 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:15.217 16:12:24 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:15.217 16:12:24 -- spdk/autobuild.sh@16 -- $ date -u 00:01:15.217 Fri Apr 26 02:12:24 PM UTC 2024 00:01:15.217 16:12:24 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:15.217 v24.05-pre-444-gbba4d07b0 00:01:15.217 16:12:24 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:15.217 16:12:24 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:15.217 16:12:24 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:15.217 16:12:24 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:15.217 16:12:24 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:15.217 16:12:24 -- common/autotest_common.sh@10 -- $ set +x 00:01:15.217 ************************************ 00:01:15.217 START TEST ubsan 00:01:15.217 ************************************ 00:01:15.217 16:12:24 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan' 00:01:15.217 using ubsan 00:01:15.217 00:01:15.217 real 0m0.000s 00:01:15.217 user 0m0.000s 00:01:15.217 sys 0m0.000s 00:01:15.217 16:12:24 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:01:15.217 16:12:24 -- common/autotest_common.sh@10 -- $ set +x 00:01:15.217 ************************************ 00:01:15.217 END TEST ubsan 00:01:15.217 ************************************ 00:01:15.477 16:12:24 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:15.477 16:12:24 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:15.477 16:12:24 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:15.477 16:12:24 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:15.477 16:12:24 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:15.477 16:12:24 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:15.477 16:12:24 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:15.477 16:12:24 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:15.477 16:12:24 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared 00:01:15.477 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:01:15.477 Using default DPDK in /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:01:15.737 Using 'verbs' RDMA provider 00:01:28.905 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:41.153 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:41.413 Creating mk/config.mk...done. 00:01:41.413 Creating mk/cc.flags.mk...done. 00:01:41.413 Type 'make' to build. 
00:01:41.413 16:12:50 -- spdk/autobuild.sh@69 -- $ run_test make make -j72 00:01:41.413 16:12:50 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:41.413 16:12:50 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:41.413 16:12:50 -- common/autotest_common.sh@10 -- $ set +x 00:01:41.673 ************************************ 00:01:41.673 START TEST make 00:01:41.673 ************************************ 00:01:41.673 16:12:50 -- common/autotest_common.sh@1111 -- $ make -j72 00:01:41.933 make[1]: Nothing to be done for 'all'. 00:01:51.924 The Meson build system 00:01:51.924 Version: 1.3.1 00:01:51.924 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk 00:01:51.924 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp 00:01:51.924 Build type: native build 00:01:51.924 Program cat found: YES (/usr/bin/cat) 00:01:51.924 Project name: DPDK 00:01:51.924 Project version: 23.11.0 00:01:51.924 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:51.924 C linker for the host machine: cc ld.bfd 2.39-16 00:01:51.924 Host machine cpu family: x86_64 00:01:51.924 Host machine cpu: x86_64 00:01:51.924 Message: ## Building in Developer Mode ## 00:01:51.924 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:51.924 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:51.924 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:51.924 Program python3 found: YES (/usr/bin/python3) 00:01:51.924 Program cat found: YES (/usr/bin/cat) 00:01:51.924 Compiler for C supports arguments -march=native: YES 00:01:51.924 Checking for size of "void *" : 8 00:01:51.924 Checking for size of "void *" : 8 (cached) 00:01:51.924 Library m found: YES 00:01:51.924 Library numa found: YES 00:01:51.924 Has header "numaif.h" : YES 00:01:51.924 Library fdt found: NO 00:01:51.924 Library execinfo found: NO 00:01:51.924 Has header "execinfo.h" : YES 00:01:51.924 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:51.924 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:51.924 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:51.924 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:51.924 Run-time dependency openssl found: YES 3.0.9 00:01:51.924 Run-time dependency libpcap found: YES 1.10.4 00:01:51.924 Has header "pcap.h" with dependency libpcap: YES 00:01:51.924 Compiler for C supports arguments -Wcast-qual: YES 00:01:51.924 Compiler for C supports arguments -Wdeprecated: YES 00:01:51.924 Compiler for C supports arguments -Wformat: YES 00:01:51.924 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:51.924 Compiler for C supports arguments -Wformat-security: NO 00:01:51.924 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:51.924 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:51.924 Compiler for C supports arguments -Wnested-externs: YES 00:01:51.924 Compiler for C supports arguments -Wold-style-definition: YES 00:01:51.924 Compiler for C supports arguments -Wpointer-arith: YES 00:01:51.924 Compiler for C supports arguments -Wsign-compare: YES 00:01:51.924 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:51.924 Compiler for C supports arguments -Wundef: YES 00:01:51.924 Compiler for C supports arguments -Wwrite-strings: YES 00:01:51.924 Compiler for C supports arguments 
-Wno-address-of-packed-member: YES 00:01:51.924 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:51.924 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:51.924 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:51.924 Program objdump found: YES (/usr/bin/objdump) 00:01:51.924 Compiler for C supports arguments -mavx512f: YES 00:01:51.924 Checking if "AVX512 checking" compiles: YES 00:01:51.924 Fetching value of define "__SSE4_2__" : 1 00:01:51.924 Fetching value of define "__AES__" : 1 00:01:51.924 Fetching value of define "__AVX__" : 1 00:01:51.924 Fetching value of define "__AVX2__" : 1 00:01:51.924 Fetching value of define "__AVX512BW__" : 1 00:01:51.924 Fetching value of define "__AVX512CD__" : 1 00:01:51.924 Fetching value of define "__AVX512DQ__" : 1 00:01:51.924 Fetching value of define "__AVX512F__" : 1 00:01:51.924 Fetching value of define "__AVX512VL__" : 1 00:01:51.924 Fetching value of define "__PCLMUL__" : 1 00:01:51.924 Fetching value of define "__RDRND__" : 1 00:01:51.924 Fetching value of define "__RDSEED__" : 1 00:01:51.924 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:51.924 Fetching value of define "__znver1__" : (undefined) 00:01:51.924 Fetching value of define "__znver2__" : (undefined) 00:01:51.924 Fetching value of define "__znver3__" : (undefined) 00:01:51.924 Fetching value of define "__znver4__" : (undefined) 00:01:51.924 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:51.924 Message: lib/log: Defining dependency "log" 00:01:51.924 Message: lib/kvargs: Defining dependency "kvargs" 00:01:51.924 Message: lib/telemetry: Defining dependency "telemetry" 00:01:51.924 Checking for function "getentropy" : NO 00:01:51.924 Message: lib/eal: Defining dependency "eal" 00:01:51.924 Message: lib/ring: Defining dependency "ring" 00:01:51.924 Message: lib/rcu: Defining dependency "rcu" 00:01:51.924 Message: lib/mempool: Defining dependency "mempool" 00:01:51.924 Message: lib/mbuf: Defining dependency "mbuf" 00:01:51.924 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:51.924 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:51.924 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:51.924 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:51.924 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:51.924 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:51.924 Compiler for C supports arguments -mpclmul: YES 00:01:51.924 Compiler for C supports arguments -maes: YES 00:01:51.924 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:51.924 Compiler for C supports arguments -mavx512bw: YES 00:01:51.924 Compiler for C supports arguments -mavx512dq: YES 00:01:51.924 Compiler for C supports arguments -mavx512vl: YES 00:01:51.924 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:51.924 Compiler for C supports arguments -mavx2: YES 00:01:51.924 Compiler for C supports arguments -mavx: YES 00:01:51.924 Message: lib/net: Defining dependency "net" 00:01:51.924 Message: lib/meter: Defining dependency "meter" 00:01:51.924 Message: lib/ethdev: Defining dependency "ethdev" 00:01:51.924 Message: lib/pci: Defining dependency "pci" 00:01:51.924 Message: lib/cmdline: Defining dependency "cmdline" 00:01:51.924 Message: lib/hash: Defining dependency "hash" 00:01:51.924 Message: lib/timer: Defining dependency "timer" 00:01:51.924 Message: lib/compressdev: Defining dependency "compressdev" 00:01:51.924 Message: lib/cryptodev: 
Defining dependency "cryptodev" 00:01:51.924 Message: lib/dmadev: Defining dependency "dmadev" 00:01:51.924 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:51.924 Message: lib/power: Defining dependency "power" 00:01:51.924 Message: lib/reorder: Defining dependency "reorder" 00:01:51.924 Message: lib/security: Defining dependency "security" 00:01:51.924 Has header "linux/userfaultfd.h" : YES 00:01:51.924 Has header "linux/vduse.h" : YES 00:01:51.924 Message: lib/vhost: Defining dependency "vhost" 00:01:51.924 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:51.924 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:51.924 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:51.924 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:51.924 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:51.924 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:51.924 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:51.924 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:51.924 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:51.924 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:51.924 Program doxygen found: YES (/usr/bin/doxygen) 00:01:51.924 Configuring doxy-api-html.conf using configuration 00:01:51.924 Configuring doxy-api-man.conf using configuration 00:01:51.924 Program mandb found: YES (/usr/bin/mandb) 00:01:51.924 Program sphinx-build found: NO 00:01:51.924 Configuring rte_build_config.h using configuration 00:01:51.924 Message: 00:01:51.924 ================= 00:01:51.924 Applications Enabled 00:01:51.924 ================= 00:01:51.924 00:01:51.924 apps: 00:01:51.924 00:01:51.924 00:01:51.924 Message: 00:01:51.924 ================= 00:01:51.924 Libraries Enabled 00:01:51.924 ================= 00:01:51.924 00:01:51.924 libs: 00:01:51.924 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:51.924 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:51.924 cryptodev, dmadev, power, reorder, security, vhost, 00:01:51.924 00:01:51.924 Message: 00:01:51.924 =============== 00:01:51.924 Drivers Enabled 00:01:51.924 =============== 00:01:51.924 00:01:51.924 common: 00:01:51.924 00:01:51.924 bus: 00:01:51.924 pci, vdev, 00:01:51.924 mempool: 00:01:51.924 ring, 00:01:51.924 dma: 00:01:51.924 00:01:51.924 net: 00:01:51.924 00:01:51.924 crypto: 00:01:51.924 00:01:51.924 compress: 00:01:51.924 00:01:51.924 vdpa: 00:01:51.924 00:01:51.924 00:01:51.924 Message: 00:01:51.924 ================= 00:01:51.924 Content Skipped 00:01:51.924 ================= 00:01:51.924 00:01:51.924 apps: 00:01:51.924 dumpcap: explicitly disabled via build config 00:01:51.924 graph: explicitly disabled via build config 00:01:51.924 pdump: explicitly disabled via build config 00:01:51.924 proc-info: explicitly disabled via build config 00:01:51.924 test-acl: explicitly disabled via build config 00:01:51.924 test-bbdev: explicitly disabled via build config 00:01:51.924 test-cmdline: explicitly disabled via build config 00:01:51.924 test-compress-perf: explicitly disabled via build config 00:01:51.924 test-crypto-perf: explicitly disabled via build config 00:01:51.924 test-dma-perf: explicitly disabled via build config 00:01:51.924 test-eventdev: explicitly disabled via build config 00:01:51.924 test-fib: explicitly disabled via build config 00:01:51.924 
test-flow-perf: explicitly disabled via build config 00:01:51.925 test-gpudev: explicitly disabled via build config 00:01:51.925 test-mldev: explicitly disabled via build config 00:01:51.925 test-pipeline: explicitly disabled via build config 00:01:51.925 test-pmd: explicitly disabled via build config 00:01:51.925 test-regex: explicitly disabled via build config 00:01:51.925 test-sad: explicitly disabled via build config 00:01:51.925 test-security-perf: explicitly disabled via build config 00:01:51.925 00:01:51.925 libs: 00:01:51.925 metrics: explicitly disabled via build config 00:01:51.925 acl: explicitly disabled via build config 00:01:51.925 bbdev: explicitly disabled via build config 00:01:51.925 bitratestats: explicitly disabled via build config 00:01:51.925 bpf: explicitly disabled via build config 00:01:51.925 cfgfile: explicitly disabled via build config 00:01:51.925 distributor: explicitly disabled via build config 00:01:51.925 efd: explicitly disabled via build config 00:01:51.925 eventdev: explicitly disabled via build config 00:01:51.925 dispatcher: explicitly disabled via build config 00:01:51.925 gpudev: explicitly disabled via build config 00:01:51.925 gro: explicitly disabled via build config 00:01:51.925 gso: explicitly disabled via build config 00:01:51.925 ip_frag: explicitly disabled via build config 00:01:51.925 jobstats: explicitly disabled via build config 00:01:51.925 latencystats: explicitly disabled via build config 00:01:51.925 lpm: explicitly disabled via build config 00:01:51.925 member: explicitly disabled via build config 00:01:51.925 pcapng: explicitly disabled via build config 00:01:51.925 rawdev: explicitly disabled via build config 00:01:51.925 regexdev: explicitly disabled via build config 00:01:51.925 mldev: explicitly disabled via build config 00:01:51.925 rib: explicitly disabled via build config 00:01:51.925 sched: explicitly disabled via build config 00:01:51.925 stack: explicitly disabled via build config 00:01:51.925 ipsec: explicitly disabled via build config 00:01:51.925 pdcp: explicitly disabled via build config 00:01:51.925 fib: explicitly disabled via build config 00:01:51.925 port: explicitly disabled via build config 00:01:51.925 pdump: explicitly disabled via build config 00:01:51.925 table: explicitly disabled via build config 00:01:51.925 pipeline: explicitly disabled via build config 00:01:51.925 graph: explicitly disabled via build config 00:01:51.925 node: explicitly disabled via build config 00:01:51.925 00:01:51.925 drivers: 00:01:51.925 common/cpt: not in enabled drivers build config 00:01:51.925 common/dpaax: not in enabled drivers build config 00:01:51.925 common/iavf: not in enabled drivers build config 00:01:51.925 common/idpf: not in enabled drivers build config 00:01:51.925 common/mvep: not in enabled drivers build config 00:01:51.925 common/octeontx: not in enabled drivers build config 00:01:51.925 bus/auxiliary: not in enabled drivers build config 00:01:51.925 bus/cdx: not in enabled drivers build config 00:01:51.925 bus/dpaa: not in enabled drivers build config 00:01:51.925 bus/fslmc: not in enabled drivers build config 00:01:51.925 bus/ifpga: not in enabled drivers build config 00:01:51.925 bus/platform: not in enabled drivers build config 00:01:51.925 bus/vmbus: not in enabled drivers build config 00:01:51.925 common/cnxk: not in enabled drivers build config 00:01:51.925 common/mlx5: not in enabled drivers build config 00:01:51.925 common/nfp: not in enabled drivers build config 00:01:51.925 common/qat: not in enabled 
drivers build config 00:01:51.925 common/sfc_efx: not in enabled drivers build config 00:01:51.925 mempool/bucket: not in enabled drivers build config 00:01:51.925 mempool/cnxk: not in enabled drivers build config 00:01:51.925 mempool/dpaa: not in enabled drivers build config 00:01:51.925 mempool/dpaa2: not in enabled drivers build config 00:01:51.925 mempool/octeontx: not in enabled drivers build config 00:01:51.925 mempool/stack: not in enabled drivers build config 00:01:51.925 dma/cnxk: not in enabled drivers build config 00:01:51.925 dma/dpaa: not in enabled drivers build config 00:01:51.925 dma/dpaa2: not in enabled drivers build config 00:01:51.925 dma/hisilicon: not in enabled drivers build config 00:01:51.925 dma/idxd: not in enabled drivers build config 00:01:51.925 dma/ioat: not in enabled drivers build config 00:01:51.925 dma/skeleton: not in enabled drivers build config 00:01:51.925 net/af_packet: not in enabled drivers build config 00:01:51.925 net/af_xdp: not in enabled drivers build config 00:01:51.925 net/ark: not in enabled drivers build config 00:01:51.925 net/atlantic: not in enabled drivers build config 00:01:51.925 net/avp: not in enabled drivers build config 00:01:51.925 net/axgbe: not in enabled drivers build config 00:01:51.925 net/bnx2x: not in enabled drivers build config 00:01:51.925 net/bnxt: not in enabled drivers build config 00:01:51.925 net/bonding: not in enabled drivers build config 00:01:51.925 net/cnxk: not in enabled drivers build config 00:01:51.925 net/cpfl: not in enabled drivers build config 00:01:51.925 net/cxgbe: not in enabled drivers build config 00:01:51.925 net/dpaa: not in enabled drivers build config 00:01:51.925 net/dpaa2: not in enabled drivers build config 00:01:51.925 net/e1000: not in enabled drivers build config 00:01:51.925 net/ena: not in enabled drivers build config 00:01:51.925 net/enetc: not in enabled drivers build config 00:01:51.925 net/enetfec: not in enabled drivers build config 00:01:51.925 net/enic: not in enabled drivers build config 00:01:51.925 net/failsafe: not in enabled drivers build config 00:01:51.925 net/fm10k: not in enabled drivers build config 00:01:51.925 net/gve: not in enabled drivers build config 00:01:51.925 net/hinic: not in enabled drivers build config 00:01:51.925 net/hns3: not in enabled drivers build config 00:01:51.925 net/i40e: not in enabled drivers build config 00:01:51.925 net/iavf: not in enabled drivers build config 00:01:51.925 net/ice: not in enabled drivers build config 00:01:51.925 net/idpf: not in enabled drivers build config 00:01:51.925 net/igc: not in enabled drivers build config 00:01:51.925 net/ionic: not in enabled drivers build config 00:01:51.925 net/ipn3ke: not in enabled drivers build config 00:01:51.925 net/ixgbe: not in enabled drivers build config 00:01:51.925 net/mana: not in enabled drivers build config 00:01:51.925 net/memif: not in enabled drivers build config 00:01:51.925 net/mlx4: not in enabled drivers build config 00:01:51.925 net/mlx5: not in enabled drivers build config 00:01:51.925 net/mvneta: not in enabled drivers build config 00:01:51.925 net/mvpp2: not in enabled drivers build config 00:01:51.925 net/netvsc: not in enabled drivers build config 00:01:51.925 net/nfb: not in enabled drivers build config 00:01:51.925 net/nfp: not in enabled drivers build config 00:01:51.925 net/ngbe: not in enabled drivers build config 00:01:51.925 net/null: not in enabled drivers build config 00:01:51.925 net/octeontx: not in enabled drivers build config 00:01:51.925 net/octeon_ep: 
not in enabled drivers build config 00:01:51.925 net/pcap: not in enabled drivers build config 00:01:51.925 net/pfe: not in enabled drivers build config 00:01:51.925 net/qede: not in enabled drivers build config 00:01:51.925 net/ring: not in enabled drivers build config 00:01:51.925 net/sfc: not in enabled drivers build config 00:01:51.925 net/softnic: not in enabled drivers build config 00:01:51.925 net/tap: not in enabled drivers build config 00:01:51.925 net/thunderx: not in enabled drivers build config 00:01:51.925 net/txgbe: not in enabled drivers build config 00:01:51.925 net/vdev_netvsc: not in enabled drivers build config 00:01:51.925 net/vhost: not in enabled drivers build config 00:01:51.925 net/virtio: not in enabled drivers build config 00:01:51.925 net/vmxnet3: not in enabled drivers build config 00:01:51.925 raw/*: missing internal dependency, "rawdev" 00:01:51.925 crypto/armv8: not in enabled drivers build config 00:01:51.925 crypto/bcmfs: not in enabled drivers build config 00:01:51.925 crypto/caam_jr: not in enabled drivers build config 00:01:51.925 crypto/ccp: not in enabled drivers build config 00:01:51.925 crypto/cnxk: not in enabled drivers build config 00:01:51.925 crypto/dpaa_sec: not in enabled drivers build config 00:01:51.925 crypto/dpaa2_sec: not in enabled drivers build config 00:01:51.925 crypto/ipsec_mb: not in enabled drivers build config 00:01:51.925 crypto/mlx5: not in enabled drivers build config 00:01:51.925 crypto/mvsam: not in enabled drivers build config 00:01:51.925 crypto/nitrox: not in enabled drivers build config 00:01:51.925 crypto/null: not in enabled drivers build config 00:01:51.925 crypto/octeontx: not in enabled drivers build config 00:01:51.925 crypto/openssl: not in enabled drivers build config 00:01:51.925 crypto/scheduler: not in enabled drivers build config 00:01:51.925 crypto/uadk: not in enabled drivers build config 00:01:51.925 crypto/virtio: not in enabled drivers build config 00:01:51.925 compress/isal: not in enabled drivers build config 00:01:51.925 compress/mlx5: not in enabled drivers build config 00:01:51.925 compress/octeontx: not in enabled drivers build config 00:01:51.925 compress/zlib: not in enabled drivers build config 00:01:51.925 regex/*: missing internal dependency, "regexdev" 00:01:51.925 ml/*: missing internal dependency, "mldev" 00:01:51.925 vdpa/ifc: not in enabled drivers build config 00:01:51.925 vdpa/mlx5: not in enabled drivers build config 00:01:51.925 vdpa/nfp: not in enabled drivers build config 00:01:51.925 vdpa/sfc: not in enabled drivers build config 00:01:51.925 event/*: missing internal dependency, "eventdev" 00:01:51.925 baseband/*: missing internal dependency, "bbdev" 00:01:51.925 gpu/*: missing internal dependency, "gpudev" 00:01:51.925 00:01:51.925 00:01:51.925 Build targets in project: 85 00:01:51.925 00:01:51.925 DPDK 23.11.0 00:01:51.925 00:01:51.925 User defined options 00:01:51.925 buildtype : debug 00:01:51.925 default_library : shared 00:01:51.925 libdir : lib 00:01:51.925 prefix : /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:01:51.925 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:51.925 c_link_args : 00:01:51.925 cpu_instruction_set: native 00:01:51.925 disable_apps : test-acl,test-bbdev,test-crypto-perf,test-fib,test-pipeline,test-gpudev,test-flow-perf,pdump,dumpcap,test-sad,test-cmdline,test-eventdev,proc-info,test,test-dma-perf,test-pmd,test-mldev,test-compress-perf,test-security-perf,graph,test-regex 00:01:51.925 
disable_libs : pipeline,member,eventdev,efd,bbdev,cfgfile,rib,sched,mldev,metrics,lpm,latencystats,pdump,pdcp,bpf,ipsec,fib,ip_frag,table,port,stack,gro,jobstats,regexdev,rawdev,pcapng,dispatcher,node,bitratestats,acl,gpudev,distributor,graph,gso 00:01:51.925 enable_docs : false 00:01:51.925 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:51.925 enable_kmods : false 00:01:51.925 tests : false 00:01:51.925 00:01:51.925 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:51.926 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp' 00:01:51.926 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:51.926 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:51.926 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:51.926 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:51.926 [5/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:51.926 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:51.926 [7/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:51.926 [8/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:51.926 [9/265] Linking static target lib/librte_kvargs.a 00:01:51.926 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:51.926 [11/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:51.926 [12/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:51.926 [13/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:51.926 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:51.926 [15/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:51.926 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:51.926 [17/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:51.926 [18/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:51.926 [19/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:51.926 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:51.926 [21/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:51.926 [22/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:51.926 [23/265] Linking static target lib/librte_log.a 00:01:51.926 [24/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:51.926 [25/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:51.926 [26/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:51.926 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:51.926 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:51.926 [29/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:51.926 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:51.926 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:51.926 [32/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:51.926 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:51.926 [34/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:51.926 [35/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:51.926 [36/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:51.926 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:51.926 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:51.926 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:51.926 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:51.926 [41/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:51.926 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:51.926 [43/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:51.926 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:51.926 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:51.926 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:51.926 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:51.926 [48/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:51.926 [49/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:51.926 [50/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:51.926 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:51.926 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:51.926 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:51.926 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:51.926 [55/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:51.926 [56/265] Linking static target lib/librte_ring.a 00:01:51.926 [57/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:51.926 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:51.926 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:51.926 [60/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:51.926 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:51.926 [62/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:51.926 [63/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:51.926 [64/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:51.926 [65/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:51.926 [66/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:51.926 [67/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:51.926 [68/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:51.926 [69/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:51.926 [70/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:51.926 [71/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:51.926 [72/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.926 [73/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 
00:01:51.926 [74/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:51.926 [75/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:51.926 [76/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:51.926 [77/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:51.926 [78/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:51.926 [79/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:51.926 [80/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:51.926 [81/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:51.926 [82/265] Linking static target lib/librte_telemetry.a 00:01:51.926 [83/265] Linking static target lib/librte_pci.a 00:01:51.926 [84/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:51.926 [85/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:51.926 [86/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:51.926 [87/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:51.926 [88/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:51.926 [89/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:51.926 [90/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:51.926 [91/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:51.926 [92/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:51.926 [93/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:51.926 [94/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:51.926 [95/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:51.926 [96/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:51.926 [97/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:51.926 [98/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:51.926 [99/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:51.926 [100/265] Linking static target lib/librte_meter.a 00:01:51.926 [101/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:51.926 [102/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:51.926 [103/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:51.926 [104/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:51.926 [105/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:51.926 [106/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:51.926 [107/265] Linking static target lib/librte_rcu.a 00:01:51.926 [108/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:51.926 [109/265] Linking static target lib/librte_mempool.a 00:01:51.926 [110/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:51.926 [111/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:51.926 [112/265] Linking static target lib/librte_net.a 00:01:51.926 [113/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:51.926 [114/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:51.926 [115/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:51.926 [116/265] Compiling C 
object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:51.926 [117/265] Linking static target lib/librte_eal.a 00:01:51.926 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:51.926 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:51.926 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:51.926 [121/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.926 [122/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.926 [123/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.926 [124/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.926 [125/265] Linking target lib/librte_log.so.24.0 00:01:51.926 [126/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:51.926 [127/265] Linking static target lib/librte_mbuf.a 00:01:51.926 [128/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:51.926 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:52.187 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:52.188 [131/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:52.188 [132/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:52.188 [133/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.188 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:52.188 [135/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:52.188 [136/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:52.188 [137/265] Linking static target lib/librte_cmdline.a 00:01:52.188 [138/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:52.188 [139/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.188 [140/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:52.188 [141/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:52.188 [142/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:52.188 [143/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:52.188 [144/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:52.188 [145/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:52.188 [146/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:52.188 [147/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:52.188 [148/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:52.188 [149/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:52.188 [150/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.188 [151/265] Linking static target lib/librte_timer.a 00:01:52.188 [152/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:52.188 [153/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:52.188 [154/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:52.188 [155/265] 
Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:52.188 [156/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:52.188 [157/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:52.188 [158/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:52.188 [159/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:52.188 [160/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:52.188 [161/265] Linking static target lib/librte_compressdev.a 00:01:52.188 [162/265] Linking target lib/librte_kvargs.so.24.0 00:01:52.188 [163/265] Linking target lib/librte_telemetry.so.24.0 00:01:52.188 [164/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:52.188 [165/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:52.188 [166/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:52.188 [167/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:52.188 [168/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:52.188 [169/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:52.188 [170/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:52.188 [171/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:52.188 [172/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:52.188 [173/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:52.188 [174/265] Linking static target lib/librte_dmadev.a 00:01:52.188 [175/265] Linking static target lib/librte_reorder.a 00:01:52.188 [176/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:52.188 [177/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:52.188 [178/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:52.188 [179/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:52.188 [180/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:52.188 [181/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:52.188 [182/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:52.188 [183/265] Linking static target lib/librte_power.a 00:01:52.447 [184/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:52.447 [185/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:52.447 [186/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:52.447 [187/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:52.447 [188/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:52.447 [189/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:52.447 [190/265] Linking static target lib/librte_security.a 00:01:52.447 [191/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:52.447 [192/265] Linking static target lib/librte_hash.a 00:01:52.447 [193/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:52.447 [194/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:52.447 [195/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:52.447 [196/265] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:52.447 [197/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:52.447 [198/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.447 [199/265] Linking static target drivers/librte_bus_vdev.a 00:01:52.447 [200/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:52.447 [201/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:52.447 [202/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:52.447 [203/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:52.447 [204/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:52.447 [205/265] Linking static target drivers/librte_bus_pci.a 00:01:52.447 [206/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:52.447 [207/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:52.447 [208/265] Linking static target drivers/librte_mempool_ring.a 00:01:52.447 [209/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.706 [210/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:52.706 [211/265] Linking static target lib/librte_cryptodev.a 00:01:52.706 [212/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.706 [213/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.706 [214/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.965 [215/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.965 [216/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.965 [217/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:52.965 [218/265] Linking static target lib/librte_ethdev.a 00:01:52.965 [219/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.223 [220/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:53.223 [221/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.223 [222/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.223 [223/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.223 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.157 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:54.158 [226/265] Linking static target lib/librte_vhost.a 00:01:54.727 [227/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.102 [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.661 [229/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.597 [230/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.597 [231/265] Linking target lib/librte_eal.so.24.0 00:02:03.856 [232/265] Generating symbol file 
lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:03.856 [233/265] Linking target drivers/librte_bus_vdev.so.24.0 00:02:03.856 [234/265] Linking target lib/librte_pci.so.24.0 00:02:03.856 [235/265] Linking target lib/librte_timer.so.24.0 00:02:03.856 [236/265] Linking target lib/librte_ring.so.24.0 00:02:03.856 [237/265] Linking target lib/librte_dmadev.so.24.0 00:02:03.856 [238/265] Linking target lib/librte_meter.so.24.0 00:02:03.856 [239/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:03.856 [240/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:03.856 [241/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:03.856 [242/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:03.856 [243/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:03.856 [244/265] Linking target drivers/librte_bus_pci.so.24.0 00:02:04.115 [245/265] Linking target lib/librte_rcu.so.24.0 00:02:04.115 [246/265] Linking target lib/librte_mempool.so.24.0 00:02:04.115 [247/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:04.115 [248/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:04.115 [249/265] Linking target lib/librte_mbuf.so.24.0 00:02:04.115 [250/265] Linking target drivers/librte_mempool_ring.so.24.0 00:02:04.374 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:04.374 [252/265] Linking target lib/librte_reorder.so.24.0 00:02:04.374 [253/265] Linking target lib/librte_cryptodev.so.24.0 00:02:04.374 [254/265] Linking target lib/librte_compressdev.so.24.0 00:02:04.374 [255/265] Linking target lib/librte_net.so.24.0 00:02:04.374 [256/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:04.632 [257/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:04.632 [258/265] Linking target lib/librte_hash.so.24.0 00:02:04.632 [259/265] Linking target lib/librte_cmdline.so.24.0 00:02:04.632 [260/265] Linking target lib/librte_security.so.24.0 00:02:04.632 [261/265] Linking target lib/librte_ethdev.so.24.0 00:02:04.632 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:04.632 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:04.891 [264/265] Linking target lib/librte_power.so.24.0 00:02:04.891 [265/265] Linking target lib/librte_vhost.so.24.0 00:02:04.891 INFO: autodetecting backend as ninja 00:02:04.891 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp -j 72 00:02:05.825 CC lib/log/log.o 00:02:05.825 CC lib/log/log_flags.o 00:02:05.825 CC lib/log/log_deprecated.o 00:02:05.825 CC lib/ut_mock/mock.o 00:02:05.825 CC lib/ut/ut.o 00:02:06.083 LIB libspdk_log.a 00:02:06.083 LIB libspdk_ut_mock.a 00:02:06.083 SO libspdk_log.so.7.0 00:02:06.083 LIB libspdk_ut.a 00:02:06.083 SO libspdk_ut_mock.so.6.0 00:02:06.083 SO libspdk_ut.so.2.0 00:02:06.083 SYMLINK libspdk_log.so 00:02:06.083 SYMLINK libspdk_ut_mock.so 00:02:06.083 SYMLINK libspdk_ut.so 00:02:06.343 CC lib/util/base64.o 00:02:06.343 CC lib/util/bit_array.o 00:02:06.343 CC lib/util/cpuset.o 00:02:06.343 CC lib/util/crc16.o 00:02:06.343 CC lib/util/crc32.o 00:02:06.343 CC lib/util/crc32_ieee.o 00:02:06.343 CC 
lib/util/crc32c.o 00:02:06.343 CC lib/util/crc64.o 00:02:06.343 CC lib/util/dif.o 00:02:06.343 CC lib/util/file.o 00:02:06.343 CC lib/util/fd.o 00:02:06.343 CC lib/util/hexlify.o 00:02:06.343 CC lib/ioat/ioat.o 00:02:06.343 CC lib/util/pipe.o 00:02:06.343 CC lib/util/iov.o 00:02:06.343 CC lib/util/math.o 00:02:06.343 CC lib/dma/dma.o 00:02:06.343 CC lib/util/strerror_tls.o 00:02:06.343 CXX lib/trace_parser/trace.o 00:02:06.343 CC lib/util/fd_group.o 00:02:06.343 CC lib/util/string.o 00:02:06.343 CC lib/util/uuid.o 00:02:06.343 CC lib/util/xor.o 00:02:06.343 CC lib/util/zipf.o 00:02:06.602 CC lib/vfio_user/host/vfio_user.o 00:02:06.602 CC lib/vfio_user/host/vfio_user_pci.o 00:02:06.602 LIB libspdk_dma.a 00:02:06.602 SO libspdk_dma.so.4.0 00:02:06.602 LIB libspdk_ioat.a 00:02:06.861 SO libspdk_ioat.so.7.0 00:02:06.861 SYMLINK libspdk_dma.so 00:02:06.861 SYMLINK libspdk_ioat.so 00:02:06.861 LIB libspdk_vfio_user.a 00:02:06.861 SO libspdk_vfio_user.so.5.0 00:02:06.861 LIB libspdk_util.a 00:02:06.861 SYMLINK libspdk_vfio_user.so 00:02:06.861 SO libspdk_util.so.9.0 00:02:07.120 SYMLINK libspdk_util.so 00:02:07.120 LIB libspdk_trace_parser.a 00:02:07.120 SO libspdk_trace_parser.so.5.0 00:02:07.377 SYMLINK libspdk_trace_parser.so 00:02:07.377 CC lib/json/json_parse.o 00:02:07.377 CC lib/json/json_write.o 00:02:07.377 CC lib/json/json_util.o 00:02:07.377 CC lib/conf/conf.o 00:02:07.377 CC lib/idxd/idxd.o 00:02:07.377 CC lib/idxd/idxd_user.o 00:02:07.377 CC lib/rdma/common.o 00:02:07.377 CC lib/rdma/rdma_verbs.o 00:02:07.377 CC lib/env_dpdk/env.o 00:02:07.377 CC lib/env_dpdk/memory.o 00:02:07.377 CC lib/env_dpdk/init.o 00:02:07.377 CC lib/env_dpdk/pci.o 00:02:07.377 CC lib/vmd/vmd.o 00:02:07.377 CC lib/env_dpdk/threads.o 00:02:07.377 CC lib/env_dpdk/pci_ioat.o 00:02:07.377 CC lib/vmd/led.o 00:02:07.377 CC lib/env_dpdk/pci_idxd.o 00:02:07.377 CC lib/env_dpdk/pci_virtio.o 00:02:07.377 CC lib/env_dpdk/pci_vmd.o 00:02:07.377 CC lib/env_dpdk/sigbus_handler.o 00:02:07.377 CC lib/env_dpdk/pci_event.o 00:02:07.377 CC lib/env_dpdk/pci_dpdk.o 00:02:07.377 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:07.377 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:07.635 LIB libspdk_conf.a 00:02:07.635 LIB libspdk_json.a 00:02:07.635 SO libspdk_conf.so.6.0 00:02:07.635 SO libspdk_json.so.6.0 00:02:07.635 LIB libspdk_rdma.a 00:02:07.893 SYMLINK libspdk_conf.so 00:02:07.893 SO libspdk_rdma.so.6.0 00:02:07.893 SYMLINK libspdk_json.so 00:02:07.893 SYMLINK libspdk_rdma.so 00:02:07.893 LIB libspdk_idxd.a 00:02:07.893 SO libspdk_idxd.so.12.0 00:02:07.893 SYMLINK libspdk_idxd.so 00:02:07.893 LIB libspdk_vmd.a 00:02:08.151 SO libspdk_vmd.so.6.0 00:02:08.151 SYMLINK libspdk_vmd.so 00:02:08.151 CC lib/jsonrpc/jsonrpc_server.o 00:02:08.151 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:08.151 CC lib/jsonrpc/jsonrpc_client.o 00:02:08.151 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:08.409 LIB libspdk_jsonrpc.a 00:02:08.409 SO libspdk_jsonrpc.so.6.0 00:02:08.409 SYMLINK libspdk_jsonrpc.so 00:02:08.409 LIB libspdk_env_dpdk.a 00:02:08.667 SO libspdk_env_dpdk.so.14.0 00:02:08.667 SYMLINK libspdk_env_dpdk.so 00:02:08.667 CC lib/rpc/rpc.o 00:02:08.924 LIB libspdk_rpc.a 00:02:08.925 SO libspdk_rpc.so.6.0 00:02:09.182 SYMLINK libspdk_rpc.so 00:02:09.440 CC lib/keyring/keyring.o 00:02:09.440 CC lib/keyring/keyring_rpc.o 00:02:09.440 CC lib/trace/trace.o 00:02:09.440 CC lib/notify/notify.o 00:02:09.440 CC lib/trace/trace_flags.o 00:02:09.440 CC lib/notify/notify_rpc.o 00:02:09.440 CC lib/trace/trace_rpc.o 00:02:09.699 LIB libspdk_keyring.a 00:02:09.699 LIB 
libspdk_notify.a 00:02:09.699 SO libspdk_keyring.so.1.0 00:02:09.699 SO libspdk_notify.so.6.0 00:02:09.699 LIB libspdk_trace.a 00:02:09.699 SYMLINK libspdk_keyring.so 00:02:09.699 SO libspdk_trace.so.10.0 00:02:09.699 SYMLINK libspdk_notify.so 00:02:09.699 SYMLINK libspdk_trace.so 00:02:10.264 CC lib/thread/thread.o 00:02:10.264 CC lib/thread/iobuf.o 00:02:10.264 CC lib/sock/sock.o 00:02:10.264 CC lib/sock/sock_rpc.o 00:02:10.523 LIB libspdk_sock.a 00:02:10.523 SO libspdk_sock.so.9.0 00:02:10.523 SYMLINK libspdk_sock.so 00:02:11.089 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:11.089 CC lib/nvme/nvme_fabric.o 00:02:11.089 CC lib/nvme/nvme_ctrlr.o 00:02:11.089 CC lib/nvme/nvme_ns_cmd.o 00:02:11.089 CC lib/nvme/nvme_ns.o 00:02:11.089 CC lib/nvme/nvme_pcie_common.o 00:02:11.089 CC lib/nvme/nvme_pcie.o 00:02:11.089 CC lib/nvme/nvme_qpair.o 00:02:11.089 CC lib/nvme/nvme.o 00:02:11.089 CC lib/nvme/nvme_quirks.o 00:02:11.089 CC lib/nvme/nvme_transport.o 00:02:11.089 CC lib/nvme/nvme_discovery.o 00:02:11.089 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:11.089 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:11.089 CC lib/nvme/nvme_io_msg.o 00:02:11.089 CC lib/nvme/nvme_tcp.o 00:02:11.089 CC lib/nvme/nvme_poll_group.o 00:02:11.089 CC lib/nvme/nvme_opal.o 00:02:11.089 CC lib/nvme/nvme_stubs.o 00:02:11.089 CC lib/nvme/nvme_zns.o 00:02:11.089 CC lib/nvme/nvme_cuse.o 00:02:11.089 CC lib/nvme/nvme_auth.o 00:02:11.089 CC lib/nvme/nvme_rdma.o 00:02:11.089 LIB libspdk_thread.a 00:02:11.346 SO libspdk_thread.so.10.0 00:02:11.346 SYMLINK libspdk_thread.so 00:02:11.605 CC lib/blob/blobstore.o 00:02:11.605 CC lib/init/json_config.o 00:02:11.605 CC lib/init/subsystem.o 00:02:11.605 CC lib/blob/zeroes.o 00:02:11.605 CC lib/blob/request.o 00:02:11.605 CC lib/init/rpc.o 00:02:11.605 CC lib/init/subsystem_rpc.o 00:02:11.605 CC lib/blob/blob_bs_dev.o 00:02:11.605 CC lib/virtio/virtio.o 00:02:11.605 CC lib/virtio/virtio_vhost_user.o 00:02:11.605 CC lib/virtio/virtio_pci.o 00:02:11.605 CC lib/virtio/virtio_vfio_user.o 00:02:11.605 CC lib/accel/accel.o 00:02:11.605 CC lib/accel/accel_rpc.o 00:02:11.605 CC lib/accel/accel_sw.o 00:02:11.862 LIB libspdk_init.a 00:02:11.862 SO libspdk_init.so.5.0 00:02:11.862 LIB libspdk_virtio.a 00:02:11.862 SO libspdk_virtio.so.7.0 00:02:11.862 SYMLINK libspdk_init.so 00:02:12.120 SYMLINK libspdk_virtio.so 00:02:12.378 CC lib/event/app.o 00:02:12.378 CC lib/event/reactor.o 00:02:12.378 CC lib/event/log_rpc.o 00:02:12.378 CC lib/event/app_rpc.o 00:02:12.378 CC lib/event/scheduler_static.o 00:02:12.378 LIB libspdk_accel.a 00:02:12.378 SO libspdk_accel.so.15.0 00:02:12.378 SYMLINK libspdk_accel.so 00:02:12.635 LIB libspdk_event.a 00:02:12.635 SO libspdk_event.so.13.0 00:02:12.635 LIB libspdk_nvme.a 00:02:12.635 SYMLINK libspdk_event.so 00:02:12.893 SO libspdk_nvme.so.13.0 00:02:12.893 CC lib/bdev/bdev.o 00:02:12.893 CC lib/bdev/bdev_rpc.o 00:02:12.893 CC lib/bdev/bdev_zone.o 00:02:12.893 CC lib/bdev/part.o 00:02:12.893 CC lib/bdev/scsi_nvme.o 00:02:13.150 SYMLINK libspdk_nvme.so 00:02:13.715 LIB libspdk_blob.a 00:02:13.715 SO libspdk_blob.so.11.0 00:02:13.715 SYMLINK libspdk_blob.so 00:02:13.973 CC lib/lvol/lvol.o 00:02:14.230 CC lib/blobfs/blobfs.o 00:02:14.230 CC lib/blobfs/tree.o 00:02:14.795 LIB libspdk_bdev.a 00:02:14.795 SO libspdk_bdev.so.15.0 00:02:14.795 LIB libspdk_blobfs.a 00:02:14.795 LIB libspdk_lvol.a 00:02:14.795 SO libspdk_blobfs.so.10.0 00:02:14.795 SYMLINK libspdk_bdev.so 00:02:14.795 SO libspdk_lvol.so.10.0 00:02:14.795 SYMLINK libspdk_blobfs.so 00:02:14.795 SYMLINK libspdk_lvol.so 00:02:15.053 
CC lib/scsi/dev.o 00:02:15.053 CC lib/scsi/lun.o 00:02:15.053 CC lib/scsi/scsi.o 00:02:15.053 CC lib/scsi/scsi_bdev.o 00:02:15.053 CC lib/scsi/port.o 00:02:15.053 CC lib/scsi/scsi_pr.o 00:02:15.053 CC lib/scsi/scsi_rpc.o 00:02:15.053 CC lib/scsi/task.o 00:02:15.053 CC lib/ftl/ftl_core.o 00:02:15.053 CC lib/nvmf/ctrlr.o 00:02:15.053 CC lib/ftl/ftl_layout.o 00:02:15.053 CC lib/ftl/ftl_init.o 00:02:15.053 CC lib/nvmf/ctrlr_discovery.o 00:02:15.053 CC lib/ftl/ftl_io.o 00:02:15.053 CC lib/ftl/ftl_sb.o 00:02:15.053 CC lib/nvmf/ctrlr_bdev.o 00:02:15.053 CC lib/ftl/ftl_debug.o 00:02:15.053 CC lib/ublk/ublk_rpc.o 00:02:15.053 CC lib/nvmf/subsystem.o 00:02:15.053 CC lib/ublk/ublk.o 00:02:15.053 CC lib/nvmf/nvmf.o 00:02:15.053 CC lib/ftl/ftl_l2p.o 00:02:15.053 CC lib/ftl/ftl_l2p_flat.o 00:02:15.053 CC lib/ftl/ftl_band_ops.o 00:02:15.053 CC lib/nvmf/nvmf_rpc.o 00:02:15.053 CC lib/ftl/ftl_nv_cache.o 00:02:15.053 CC lib/ftl/ftl_band.o 00:02:15.053 CC lib/nvmf/transport.o 00:02:15.053 CC lib/ftl/ftl_writer.o 00:02:15.053 CC lib/ftl/ftl_rq.o 00:02:15.053 CC lib/nvmf/tcp.o 00:02:15.053 CC lib/nvmf/rdma.o 00:02:15.053 CC lib/ftl/ftl_l2p_cache.o 00:02:15.053 CC lib/ftl/ftl_reloc.o 00:02:15.053 CC lib/ftl/ftl_p2l.o 00:02:15.053 CC lib/ftl/mngt/ftl_mngt.o 00:02:15.053 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:15.053 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:15.053 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:15.053 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:15.053 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:15.053 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:15.053 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:15.053 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:15.053 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:15.053 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:15.053 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:15.053 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:15.053 CC lib/ftl/utils/ftl_md.o 00:02:15.053 CC lib/nbd/nbd.o 00:02:15.053 CC lib/ftl/utils/ftl_conf.o 00:02:15.053 CC lib/nbd/nbd_rpc.o 00:02:15.053 CC lib/ftl/utils/ftl_mempool.o 00:02:15.053 CC lib/ftl/utils/ftl_property.o 00:02:15.053 CC lib/ftl/utils/ftl_bitmap.o 00:02:15.053 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:15.053 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:15.053 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:15.053 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:15.053 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:15.053 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:15.053 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:15.053 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:15.053 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:15.053 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:15.053 CC lib/ftl/base/ftl_base_dev.o 00:02:15.053 CC lib/ftl/base/ftl_base_bdev.o 00:02:15.053 CC lib/ftl/ftl_trace.o 00:02:15.620 LIB libspdk_scsi.a 00:02:15.620 LIB libspdk_nbd.a 00:02:15.879 LIB libspdk_ublk.a 00:02:15.879 SO libspdk_nbd.so.7.0 00:02:15.879 SO libspdk_scsi.so.9.0 00:02:15.879 SO libspdk_ublk.so.3.0 00:02:15.879 SYMLINK libspdk_nbd.so 00:02:15.879 SYMLINK libspdk_scsi.so 00:02:15.879 SYMLINK libspdk_ublk.so 00:02:16.138 LIB libspdk_ftl.a 00:02:16.138 SO libspdk_ftl.so.9.0 00:02:16.138 CC lib/vhost/vhost.o 00:02:16.138 CC lib/vhost/vhost_scsi.o 00:02:16.138 CC lib/vhost/vhost_rpc.o 00:02:16.138 CC lib/vhost/rte_vhost_user.o 00:02:16.138 CC lib/vhost/vhost_blk.o 00:02:16.138 CC lib/iscsi/conn.o 00:02:16.138 CC lib/iscsi/init_grp.o 00:02:16.138 CC lib/iscsi/param.o 00:02:16.138 CC lib/iscsi/iscsi.o 00:02:16.138 CC lib/iscsi/md5.o 00:02:16.138 CC lib/iscsi/portal_grp.o 00:02:16.138 CC lib/iscsi/tgt_node.o 00:02:16.138 CC 
lib/iscsi/iscsi_subsystem.o 00:02:16.138 CC lib/iscsi/iscsi_rpc.o 00:02:16.138 CC lib/iscsi/task.o 00:02:16.397 SYMLINK libspdk_ftl.so 00:02:16.966 LIB libspdk_nvmf.a 00:02:16.966 SO libspdk_nvmf.so.18.0 00:02:16.966 LIB libspdk_vhost.a 00:02:16.966 SYMLINK libspdk_nvmf.so 00:02:17.226 SO libspdk_vhost.so.8.0 00:02:17.226 SYMLINK libspdk_vhost.so 00:02:17.226 LIB libspdk_iscsi.a 00:02:17.226 SO libspdk_iscsi.so.8.0 00:02:17.486 SYMLINK libspdk_iscsi.so 00:02:18.056 CC module/env_dpdk/env_dpdk_rpc.o 00:02:18.056 LIB libspdk_env_dpdk_rpc.a 00:02:18.056 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:18.056 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:18.056 CC module/accel/error/accel_error.o 00:02:18.056 CC module/accel/dsa/accel_dsa.o 00:02:18.056 CC module/accel/error/accel_error_rpc.o 00:02:18.056 CC module/accel/dsa/accel_dsa_rpc.o 00:02:18.056 CC module/blob/bdev/blob_bdev.o 00:02:18.056 SO libspdk_env_dpdk_rpc.so.6.0 00:02:18.056 CC module/accel/iaa/accel_iaa.o 00:02:18.056 CC module/accel/iaa/accel_iaa_rpc.o 00:02:18.056 CC module/sock/posix/posix.o 00:02:18.056 CC module/scheduler/gscheduler/gscheduler.o 00:02:18.056 CC module/keyring/file/keyring.o 00:02:18.056 CC module/keyring/file/keyring_rpc.o 00:02:18.056 CC module/accel/ioat/accel_ioat.o 00:02:18.056 CC module/accel/ioat/accel_ioat_rpc.o 00:02:18.316 SYMLINK libspdk_env_dpdk_rpc.so 00:02:18.316 LIB libspdk_scheduler_gscheduler.a 00:02:18.316 LIB libspdk_scheduler_dpdk_governor.a 00:02:18.316 LIB libspdk_accel_error.a 00:02:18.316 LIB libspdk_scheduler_dynamic.a 00:02:18.316 LIB libspdk_keyring_file.a 00:02:18.316 SO libspdk_scheduler_gscheduler.so.4.0 00:02:18.316 SO libspdk_accel_error.so.2.0 00:02:18.316 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:18.316 SO libspdk_scheduler_dynamic.so.4.0 00:02:18.316 LIB libspdk_accel_ioat.a 00:02:18.316 LIB libspdk_accel_iaa.a 00:02:18.316 SO libspdk_keyring_file.so.1.0 00:02:18.316 LIB libspdk_accel_dsa.a 00:02:18.316 LIB libspdk_blob_bdev.a 00:02:18.316 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:18.316 SYMLINK libspdk_scheduler_gscheduler.so 00:02:18.316 SO libspdk_accel_ioat.so.6.0 00:02:18.316 SO libspdk_accel_iaa.so.3.0 00:02:18.316 SYMLINK libspdk_scheduler_dynamic.so 00:02:18.316 SYMLINK libspdk_accel_error.so 00:02:18.316 SO libspdk_accel_dsa.so.5.0 00:02:18.316 SYMLINK libspdk_keyring_file.so 00:02:18.316 SO libspdk_blob_bdev.so.11.0 00:02:18.574 SYMLINK libspdk_accel_iaa.so 00:02:18.574 SYMLINK libspdk_accel_ioat.so 00:02:18.574 SYMLINK libspdk_accel_dsa.so 00:02:18.574 SYMLINK libspdk_blob_bdev.so 00:02:18.832 LIB libspdk_sock_posix.a 00:02:18.832 SO libspdk_sock_posix.so.6.0 00:02:18.832 SYMLINK libspdk_sock_posix.so 00:02:18.832 CC module/bdev/delay/vbdev_delay.o 00:02:18.832 CC module/bdev/gpt/gpt.o 00:02:18.832 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:18.832 CC module/bdev/gpt/vbdev_gpt.o 00:02:18.832 CC module/blobfs/bdev/blobfs_bdev.o 00:02:18.832 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:18.832 CC module/bdev/split/vbdev_split.o 00:02:18.832 CC module/bdev/error/vbdev_error_rpc.o 00:02:18.832 CC module/bdev/split/vbdev_split_rpc.o 00:02:18.832 CC module/bdev/error/vbdev_error.o 00:02:18.832 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:18.832 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:18.832 CC module/bdev/lvol/vbdev_lvol.o 00:02:18.832 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:18.832 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:18.832 CC module/bdev/malloc/bdev_malloc.o 00:02:18.832 CC module/bdev/malloc/bdev_malloc_rpc.o 
00:02:18.832 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:18.832 CC module/bdev/ftl/bdev_ftl.o 00:02:18.832 CC module/bdev/aio/bdev_aio_rpc.o 00:02:18.832 CC module/bdev/aio/bdev_aio.o 00:02:18.832 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:18.832 CC module/bdev/passthru/vbdev_passthru.o 00:02:18.832 CC module/bdev/null/bdev_null.o 00:02:18.832 CC module/bdev/null/bdev_null_rpc.o 00:02:18.832 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:18.832 CC module/bdev/nvme/bdev_nvme.o 00:02:18.832 CC module/bdev/nvme/bdev_mdns_client.o 00:02:18.832 CC module/bdev/nvme/nvme_rpc.o 00:02:19.090 CC module/bdev/nvme/vbdev_opal.o 00:02:19.090 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:19.090 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:19.090 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:19.090 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:19.090 CC module/bdev/iscsi/bdev_iscsi.o 00:02:19.090 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:19.090 CC module/bdev/raid/bdev_raid_rpc.o 00:02:19.090 CC module/bdev/raid/bdev_raid_sb.o 00:02:19.090 CC module/bdev/raid/bdev_raid.o 00:02:19.090 CC module/bdev/raid/raid0.o 00:02:19.090 CC module/bdev/raid/raid1.o 00:02:19.090 CC module/bdev/raid/concat.o 00:02:19.090 LIB libspdk_bdev_split.a 00:02:19.090 LIB libspdk_blobfs_bdev.a 00:02:19.090 SO libspdk_bdev_split.so.6.0 00:02:19.090 LIB libspdk_bdev_error.a 00:02:19.090 SO libspdk_blobfs_bdev.so.6.0 00:02:19.349 LIB libspdk_bdev_null.a 00:02:19.349 LIB libspdk_bdev_passthru.a 00:02:19.349 SO libspdk_bdev_error.so.6.0 00:02:19.349 SYMLINK libspdk_bdev_split.so 00:02:19.349 LIB libspdk_bdev_delay.a 00:02:19.349 SO libspdk_bdev_null.so.6.0 00:02:19.349 SO libspdk_bdev_passthru.so.6.0 00:02:19.349 LIB libspdk_bdev_gpt.a 00:02:19.349 LIB libspdk_bdev_aio.a 00:02:19.349 SYMLINK libspdk_blobfs_bdev.so 00:02:19.349 LIB libspdk_bdev_zone_block.a 00:02:19.349 SO libspdk_bdev_delay.so.6.0 00:02:19.349 LIB libspdk_bdev_malloc.a 00:02:19.349 SO libspdk_bdev_gpt.so.6.0 00:02:19.349 SO libspdk_bdev_aio.so.6.0 00:02:19.349 SYMLINK libspdk_bdev_error.so 00:02:19.349 SO libspdk_bdev_zone_block.so.6.0 00:02:19.349 SYMLINK libspdk_bdev_passthru.so 00:02:19.349 SYMLINK libspdk_bdev_null.so 00:02:19.349 LIB libspdk_bdev_iscsi.a 00:02:19.349 LIB libspdk_bdev_ftl.a 00:02:19.349 SO libspdk_bdev_malloc.so.6.0 00:02:19.349 SO libspdk_bdev_iscsi.so.6.0 00:02:19.349 SYMLINK libspdk_bdev_delay.so 00:02:19.349 SYMLINK libspdk_bdev_aio.so 00:02:19.349 SO libspdk_bdev_ftl.so.6.0 00:02:19.349 SYMLINK libspdk_bdev_gpt.so 00:02:19.349 SYMLINK libspdk_bdev_zone_block.so 00:02:19.349 LIB libspdk_bdev_lvol.a 00:02:19.349 SYMLINK libspdk_bdev_malloc.so 00:02:19.349 SYMLINK libspdk_bdev_iscsi.so 00:02:19.349 SYMLINK libspdk_bdev_ftl.so 00:02:19.349 SO libspdk_bdev_lvol.so.6.0 00:02:19.608 LIB libspdk_bdev_virtio.a 00:02:19.608 SO libspdk_bdev_virtio.so.6.0 00:02:19.608 SYMLINK libspdk_bdev_lvol.so 00:02:19.608 SYMLINK libspdk_bdev_virtio.so 00:02:19.868 LIB libspdk_bdev_raid.a 00:02:19.868 SO libspdk_bdev_raid.so.6.0 00:02:19.868 SYMLINK libspdk_bdev_raid.so 00:02:20.437 LIB libspdk_bdev_nvme.a 00:02:20.697 SO libspdk_bdev_nvme.so.7.0 00:02:20.697 SYMLINK libspdk_bdev_nvme.so 00:02:21.266 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:21.266 CC module/event/subsystems/iobuf/iobuf.o 00:02:21.266 CC module/event/subsystems/vmd/vmd.o 00:02:21.525 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:21.525 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:21.525 CC module/event/subsystems/keyring/keyring.o 00:02:21.525 CC 
module/event/subsystems/scheduler/scheduler.o 00:02:21.525 CC module/event/subsystems/sock/sock.o 00:02:21.525 LIB libspdk_event_vhost_blk.a 00:02:21.525 LIB libspdk_event_keyring.a 00:02:21.525 LIB libspdk_event_sock.a 00:02:21.525 LIB libspdk_event_iobuf.a 00:02:21.525 LIB libspdk_event_scheduler.a 00:02:21.525 LIB libspdk_event_vmd.a 00:02:21.525 SO libspdk_event_vhost_blk.so.3.0 00:02:21.525 SO libspdk_event_keyring.so.1.0 00:02:21.525 SO libspdk_event_sock.so.5.0 00:02:21.525 SO libspdk_event_vmd.so.6.0 00:02:21.525 SO libspdk_event_scheduler.so.4.0 00:02:21.525 SO libspdk_event_iobuf.so.3.0 00:02:21.525 SYMLINK libspdk_event_vhost_blk.so 00:02:21.525 SYMLINK libspdk_event_keyring.so 00:02:21.525 SYMLINK libspdk_event_scheduler.so 00:02:21.785 SYMLINK libspdk_event_sock.so 00:02:21.785 SYMLINK libspdk_event_iobuf.so 00:02:21.785 SYMLINK libspdk_event_vmd.so 00:02:22.044 CC module/event/subsystems/accel/accel.o 00:02:22.044 LIB libspdk_event_accel.a 00:02:22.480 SO libspdk_event_accel.so.6.0 00:02:22.480 SYMLINK libspdk_event_accel.so 00:02:22.769 CC module/event/subsystems/bdev/bdev.o 00:02:22.769 LIB libspdk_event_bdev.a 00:02:22.769 SO libspdk_event_bdev.so.6.0 00:02:23.069 SYMLINK libspdk_event_bdev.so 00:02:23.070 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:23.354 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:23.354 CC module/event/subsystems/nbd/nbd.o 00:02:23.354 CC module/event/subsystems/scsi/scsi.o 00:02:23.354 CC module/event/subsystems/ublk/ublk.o 00:02:23.354 LIB libspdk_event_nbd.a 00:02:23.354 SO libspdk_event_nbd.so.6.0 00:02:23.354 LIB libspdk_event_scsi.a 00:02:23.354 LIB libspdk_event_ublk.a 00:02:23.354 LIB libspdk_event_nvmf.a 00:02:23.354 SO libspdk_event_scsi.so.6.0 00:02:23.354 SO libspdk_event_nvmf.so.6.0 00:02:23.354 SO libspdk_event_ublk.so.3.0 00:02:23.354 SYMLINK libspdk_event_nbd.so 00:02:23.354 SYMLINK libspdk_event_ublk.so 00:02:23.354 SYMLINK libspdk_event_scsi.so 00:02:23.629 SYMLINK libspdk_event_nvmf.so 00:02:23.888 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:23.888 CC module/event/subsystems/iscsi/iscsi.o 00:02:23.888 LIB libspdk_event_vhost_scsi.a 00:02:23.888 SO libspdk_event_vhost_scsi.so.3.0 00:02:23.888 LIB libspdk_event_iscsi.a 00:02:23.888 SO libspdk_event_iscsi.so.6.0 00:02:24.148 SYMLINK libspdk_event_vhost_scsi.so 00:02:24.148 SYMLINK libspdk_event_iscsi.so 00:02:24.406 SO libspdk.so.6.0 00:02:24.406 SYMLINK libspdk.so 00:02:24.666 CC app/trace_record/trace_record.o 00:02:24.666 CC app/spdk_top/spdk_top.o 00:02:24.666 CC app/spdk_lspci/spdk_lspci.o 00:02:24.666 CC app/spdk_nvme_discover/discovery_aer.o 00:02:24.666 CC app/spdk_nvme_perf/perf.o 00:02:24.666 CC app/spdk_nvme_identify/identify.o 00:02:24.666 CXX app/trace/trace.o 00:02:24.666 CC test/rpc_client/rpc_client_test.o 00:02:24.666 TEST_HEADER include/spdk/accel.h 00:02:24.666 TEST_HEADER include/spdk/accel_module.h 00:02:24.666 TEST_HEADER include/spdk/assert.h 00:02:24.666 TEST_HEADER include/spdk/barrier.h 00:02:24.666 TEST_HEADER include/spdk/base64.h 00:02:24.666 TEST_HEADER include/spdk/bdev.h 00:02:24.666 TEST_HEADER include/spdk/bdev_module.h 00:02:24.666 TEST_HEADER include/spdk/bdev_zone.h 00:02:24.666 TEST_HEADER include/spdk/bit_array.h 00:02:24.666 TEST_HEADER include/spdk/bit_pool.h 00:02:24.666 TEST_HEADER include/spdk/blob_bdev.h 00:02:24.666 TEST_HEADER include/spdk/blobfs.h 00:02:24.666 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:24.666 CC app/spdk_dd/spdk_dd.o 00:02:24.666 CC app/iscsi_tgt/iscsi_tgt.o 00:02:24.666 TEST_HEADER 
include/spdk/blob.h 00:02:24.666 TEST_HEADER include/spdk/conf.h 00:02:24.666 TEST_HEADER include/spdk/config.h 00:02:24.666 CC app/vhost/vhost.o 00:02:24.666 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:24.666 TEST_HEADER include/spdk/cpuset.h 00:02:24.666 TEST_HEADER include/spdk/crc16.h 00:02:24.666 CC app/nvmf_tgt/nvmf_main.o 00:02:24.666 TEST_HEADER include/spdk/crc32.h 00:02:24.667 TEST_HEADER include/spdk/crc64.h 00:02:24.667 TEST_HEADER include/spdk/dif.h 00:02:24.667 TEST_HEADER include/spdk/dma.h 00:02:24.667 TEST_HEADER include/spdk/endian.h 00:02:24.667 TEST_HEADER include/spdk/env_dpdk.h 00:02:24.667 TEST_HEADER include/spdk/env.h 00:02:24.667 TEST_HEADER include/spdk/event.h 00:02:24.667 CC app/spdk_tgt/spdk_tgt.o 00:02:24.667 TEST_HEADER include/spdk/fd_group.h 00:02:24.667 TEST_HEADER include/spdk/fd.h 00:02:24.667 TEST_HEADER include/spdk/file.h 00:02:24.667 TEST_HEADER include/spdk/ftl.h 00:02:24.667 TEST_HEADER include/spdk/gpt_spec.h 00:02:24.667 TEST_HEADER include/spdk/hexlify.h 00:02:24.667 TEST_HEADER include/spdk/histogram_data.h 00:02:24.667 TEST_HEADER include/spdk/idxd.h 00:02:24.667 TEST_HEADER include/spdk/idxd_spec.h 00:02:24.667 TEST_HEADER include/spdk/init.h 00:02:24.667 CC examples/ioat/perf/perf.o 00:02:24.667 TEST_HEADER include/spdk/ioat.h 00:02:24.929 TEST_HEADER include/spdk/ioat_spec.h 00:02:24.929 CC examples/accel/perf/accel_perf.o 00:02:24.929 TEST_HEADER include/spdk/iscsi_spec.h 00:02:24.929 CC examples/util/zipf/zipf.o 00:02:24.929 CC examples/nvme/arbitration/arbitration.o 00:02:24.929 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:24.929 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:24.929 CC test/app/jsoncat/jsoncat.o 00:02:24.929 TEST_HEADER include/spdk/json.h 00:02:24.929 CC examples/nvme/abort/abort.o 00:02:24.929 TEST_HEADER include/spdk/jsonrpc.h 00:02:24.929 CC test/env/vtophys/vtophys.o 00:02:24.929 CC examples/vmd/lsvmd/lsvmd.o 00:02:24.929 CC app/fio/nvme/fio_plugin.o 00:02:24.929 CC test/env/pci/pci_ut.o 00:02:24.929 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:24.929 TEST_HEADER include/spdk/keyring.h 00:02:24.929 CC test/env/memory/memory_ut.o 00:02:24.929 CC examples/ioat/verify/verify.o 00:02:24.929 TEST_HEADER include/spdk/keyring_module.h 00:02:24.929 CC test/nvme/aer/aer.o 00:02:24.929 TEST_HEADER include/spdk/likely.h 00:02:24.929 CC examples/idxd/perf/perf.o 00:02:24.929 CC test/event/event_perf/event_perf.o 00:02:24.929 CC examples/nvme/hello_world/hello_world.o 00:02:24.929 CC examples/nvme/reconnect/reconnect.o 00:02:24.929 TEST_HEADER include/spdk/log.h 00:02:24.929 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:24.929 TEST_HEADER include/spdk/lvol.h 00:02:24.929 CC test/nvme/reserve/reserve.o 00:02:24.929 CC test/thread/poller_perf/poller_perf.o 00:02:24.929 CC test/app/histogram_perf/histogram_perf.o 00:02:24.929 TEST_HEADER include/spdk/memory.h 00:02:24.929 CC test/event/reactor_perf/reactor_perf.o 00:02:24.929 CC test/nvme/err_injection/err_injection.o 00:02:24.929 CC test/nvme/overhead/overhead.o 00:02:24.929 CC examples/vmd/led/led.o 00:02:24.929 CC test/nvme/e2edp/nvme_dp.o 00:02:24.929 TEST_HEADER include/spdk/mmio.h 00:02:24.929 CC examples/sock/hello_world/hello_sock.o 00:02:24.929 CC test/app/stub/stub.o 00:02:24.929 CC examples/nvme/hotplug/hotplug.o 00:02:24.929 TEST_HEADER include/spdk/nbd.h 00:02:24.929 CC test/event/reactor/reactor.o 00:02:24.929 TEST_HEADER include/spdk/notify.h 00:02:24.930 CC test/nvme/startup/startup.o 00:02:24.930 CC app/fio/bdev/fio_plugin.o 
00:02:24.930 CC test/blobfs/mkfs/mkfs.o 00:02:24.930 TEST_HEADER include/spdk/nvme.h 00:02:24.930 CC test/nvme/sgl/sgl.o 00:02:24.930 TEST_HEADER include/spdk/nvme_intel.h 00:02:24.930 CC test/nvme/connect_stress/connect_stress.o 00:02:24.930 CC test/nvme/reset/reset.o 00:02:24.930 CC test/nvme/simple_copy/simple_copy.o 00:02:24.930 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:24.930 CC test/dma/test_dma/test_dma.o 00:02:24.930 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:24.930 TEST_HEADER include/spdk/nvme_spec.h 00:02:24.930 CC test/event/app_repeat/app_repeat.o 00:02:24.930 TEST_HEADER include/spdk/nvme_zns.h 00:02:24.930 CC test/nvme/compliance/nvme_compliance.o 00:02:24.930 CC test/nvme/boot_partition/boot_partition.o 00:02:24.930 CC test/bdev/bdevio/bdevio.o 00:02:24.930 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:24.930 LINK spdk_lspci 00:02:24.930 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:24.930 CC examples/bdev/bdevperf/bdevperf.o 00:02:24.930 TEST_HEADER include/spdk/nvmf.h 00:02:24.930 CC examples/blob/hello_world/hello_blob.o 00:02:24.930 CC test/accel/dif/dif.o 00:02:24.930 TEST_HEADER include/spdk/nvmf_spec.h 00:02:24.930 CC examples/bdev/hello_world/hello_bdev.o 00:02:24.930 CC examples/thread/thread/thread_ex.o 00:02:24.930 TEST_HEADER include/spdk/nvmf_transport.h 00:02:24.930 TEST_HEADER include/spdk/opal.h 00:02:24.930 TEST_HEADER include/spdk/opal_spec.h 00:02:24.930 TEST_HEADER include/spdk/pci_ids.h 00:02:24.930 CC test/app/bdev_svc/bdev_svc.o 00:02:24.930 CC examples/nvmf/nvmf/nvmf.o 00:02:24.930 CC examples/blob/cli/blobcli.o 00:02:24.930 TEST_HEADER include/spdk/pipe.h 00:02:24.930 CC test/event/scheduler/scheduler.o 00:02:24.930 TEST_HEADER include/spdk/queue.h 00:02:24.930 TEST_HEADER include/spdk/reduce.h 00:02:24.930 TEST_HEADER include/spdk/rpc.h 00:02:24.930 TEST_HEADER include/spdk/scheduler.h 00:02:24.930 TEST_HEADER include/spdk/scsi.h 00:02:24.930 TEST_HEADER include/spdk/scsi_spec.h 00:02:24.930 TEST_HEADER include/spdk/sock.h 00:02:24.930 TEST_HEADER include/spdk/stdinc.h 00:02:24.930 TEST_HEADER include/spdk/string.h 00:02:24.930 TEST_HEADER include/spdk/thread.h 00:02:24.930 TEST_HEADER include/spdk/trace.h 00:02:24.930 TEST_HEADER include/spdk/trace_parser.h 00:02:24.930 TEST_HEADER include/spdk/tree.h 00:02:24.930 LINK spdk_nvme_discover 00:02:24.930 TEST_HEADER include/spdk/ublk.h 00:02:24.930 TEST_HEADER include/spdk/util.h 00:02:24.930 CC test/env/mem_callbacks/mem_callbacks.o 00:02:24.930 TEST_HEADER include/spdk/uuid.h 00:02:24.930 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:24.930 TEST_HEADER include/spdk/version.h 00:02:24.930 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:24.930 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:24.930 CC test/lvol/esnap/esnap.o 00:02:24.930 TEST_HEADER include/spdk/vhost.h 00:02:24.930 TEST_HEADER include/spdk/vmd.h 00:02:24.930 LINK rpc_client_test 00:02:24.930 TEST_HEADER include/spdk/xor.h 00:02:24.930 TEST_HEADER include/spdk/zipf.h 00:02:24.930 LINK interrupt_tgt 00:02:24.930 CXX test/cpp_headers/accel.o 00:02:24.930 LINK nvmf_tgt 00:02:25.193 LINK jsoncat 00:02:25.193 LINK lsvmd 00:02:25.193 LINK spdk_trace_record 00:02:25.193 LINK zipf 00:02:25.193 LINK event_perf 00:02:25.193 LINK iscsi_tgt 00:02:25.193 LINK vhost 00:02:25.193 LINK env_dpdk_post_init 00:02:25.193 LINK histogram_perf 00:02:25.193 LINK reactor 00:02:25.193 LINK spdk_tgt 00:02:25.193 LINK ioat_perf 00:02:25.193 LINK reactor_perf 00:02:25.193 LINK vtophys 00:02:25.193 LINK led 00:02:25.193 LINK stub 00:02:25.193 LINK poller_perf 
00:02:25.193 LINK verify 00:02:25.193 LINK cmb_copy 00:02:25.193 LINK pmr_persistence 00:02:25.193 LINK app_repeat 00:02:25.193 LINK boot_partition 00:02:25.193 LINK err_injection 00:02:25.193 LINK startup 00:02:25.193 LINK reserve 00:02:25.193 LINK connect_stress 00:02:25.193 LINK mkfs 00:02:25.193 LINK hello_world 00:02:25.193 LINK hello_sock 00:02:25.193 LINK hotplug 00:02:25.193 LINK spdk_dd 00:02:25.193 LINK bdev_svc 00:02:25.193 LINK hello_bdev 00:02:25.193 LINK scheduler 00:02:25.193 LINK thread 00:02:25.456 LINK spdk_trace 00:02:25.456 LINK overhead 00:02:25.456 LINK nvme_dp 00:02:25.456 LINK simple_copy 00:02:25.456 LINK sgl 00:02:25.456 CXX test/cpp_headers/accel_module.o 00:02:25.456 LINK hello_blob 00:02:25.456 LINK aer 00:02:25.456 LINK reset 00:02:25.456 CC test/nvme/fused_ordering/fused_ordering.o 00:02:25.456 LINK reconnect 00:02:25.456 LINK abort 00:02:25.456 LINK arbitration 00:02:25.456 LINK idxd_perf 00:02:25.456 LINK pci_ut 00:02:25.456 CXX test/cpp_headers/assert.o 00:02:25.456 CXX test/cpp_headers/barrier.o 00:02:25.456 LINK nvme_compliance 00:02:25.456 LINK nvmf 00:02:25.456 CXX test/cpp_headers/base64.o 00:02:25.456 CXX test/cpp_headers/bdev.o 00:02:25.456 CXX test/cpp_headers/bdev_module.o 00:02:25.456 LINK dif 00:02:25.456 CXX test/cpp_headers/bdev_zone.o 00:02:25.456 CXX test/cpp_headers/bit_array.o 00:02:25.457 CXX test/cpp_headers/bit_pool.o 00:02:25.457 CXX test/cpp_headers/blob_bdev.o 00:02:25.457 CXX test/cpp_headers/blobfs_bdev.o 00:02:25.457 CXX test/cpp_headers/blobfs.o 00:02:25.457 CXX test/cpp_headers/blob.o 00:02:25.457 CXX test/cpp_headers/conf.o 00:02:25.457 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:25.457 LINK accel_perf 00:02:25.457 CC test/nvme/fdp/fdp.o 00:02:25.457 CXX test/cpp_headers/config.o 00:02:25.457 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:25.457 CXX test/cpp_headers/cpuset.o 00:02:25.457 CXX test/cpp_headers/crc16.o 00:02:25.457 CC test/nvme/cuse/cuse.o 00:02:25.457 LINK test_dma 00:02:25.457 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:25.457 LINK bdevio 00:02:25.457 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:25.457 LINK nvme_manage 00:02:25.723 CXX test/cpp_headers/crc32.o 00:02:25.724 CXX test/cpp_headers/crc64.o 00:02:25.724 CXX test/cpp_headers/dif.o 00:02:25.724 CXX test/cpp_headers/dma.o 00:02:25.724 CXX test/cpp_headers/env_dpdk.o 00:02:25.724 CXX test/cpp_headers/endian.o 00:02:25.724 CXX test/cpp_headers/env.o 00:02:25.724 CXX test/cpp_headers/event.o 00:02:25.724 CXX test/cpp_headers/fd_group.o 00:02:25.724 CXX test/cpp_headers/fd.o 00:02:25.724 CXX test/cpp_headers/file.o 00:02:25.724 LINK nvme_fuzz 00:02:25.724 CXX test/cpp_headers/ftl.o 00:02:25.724 CXX test/cpp_headers/gpt_spec.o 00:02:25.724 LINK spdk_nvme 00:02:25.724 CXX test/cpp_headers/hexlify.o 00:02:25.724 CXX test/cpp_headers/idxd_spec.o 00:02:25.724 CXX test/cpp_headers/histogram_data.o 00:02:25.724 CXX test/cpp_headers/idxd.o 00:02:25.724 CXX test/cpp_headers/init.o 00:02:25.724 CXX test/cpp_headers/ioat.o 00:02:25.724 CXX test/cpp_headers/ioat_spec.o 00:02:25.724 LINK spdk_bdev 00:02:25.724 CXX test/cpp_headers/iscsi_spec.o 00:02:25.724 LINK blobcli 00:02:25.724 CXX test/cpp_headers/json.o 00:02:25.724 CXX test/cpp_headers/jsonrpc.o 00:02:25.724 CXX test/cpp_headers/keyring.o 00:02:25.724 CXX test/cpp_headers/keyring_module.o 00:02:25.724 CXX test/cpp_headers/lvol.o 00:02:25.724 CXX test/cpp_headers/likely.o 00:02:25.724 CXX test/cpp_headers/log.o 00:02:25.724 CXX test/cpp_headers/memory.o 00:02:25.724 LINK fused_ordering 00:02:25.724 
CXX test/cpp_headers/mmio.o 00:02:25.724 CXX test/cpp_headers/nbd.o 00:02:25.724 CXX test/cpp_headers/notify.o 00:02:25.724 CXX test/cpp_headers/nvme.o 00:02:25.724 CXX test/cpp_headers/nvme_intel.o 00:02:25.724 CXX test/cpp_headers/nvme_ocssd.o 00:02:25.724 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:25.724 CXX test/cpp_headers/nvme_spec.o 00:02:25.724 CXX test/cpp_headers/nvme_zns.o 00:02:25.724 CXX test/cpp_headers/nvmf_cmd.o 00:02:25.724 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:25.987 CXX test/cpp_headers/nvmf.o 00:02:25.987 LINK doorbell_aers 00:02:25.987 CXX test/cpp_headers/nvmf_spec.o 00:02:25.987 CXX test/cpp_headers/nvmf_transport.o 00:02:25.987 CXX test/cpp_headers/opal.o 00:02:25.987 CXX test/cpp_headers/opal_spec.o 00:02:25.987 CXX test/cpp_headers/pci_ids.o 00:02:25.987 CXX test/cpp_headers/pipe.o 00:02:25.987 CXX test/cpp_headers/queue.o 00:02:25.987 CXX test/cpp_headers/reduce.o 00:02:25.987 CXX test/cpp_headers/rpc.o 00:02:25.987 CXX test/cpp_headers/scheduler.o 00:02:25.987 CXX test/cpp_headers/scsi.o 00:02:25.987 CXX test/cpp_headers/scsi_spec.o 00:02:25.987 CXX test/cpp_headers/sock.o 00:02:25.987 CXX test/cpp_headers/stdinc.o 00:02:25.987 CXX test/cpp_headers/string.o 00:02:25.987 LINK mem_callbacks 00:02:25.987 CXX test/cpp_headers/thread.o 00:02:25.987 CXX test/cpp_headers/trace.o 00:02:25.987 CXX test/cpp_headers/trace_parser.o 00:02:25.987 CXX test/cpp_headers/tree.o 00:02:25.987 LINK spdk_top 00:02:25.987 LINK bdevperf 00:02:25.987 LINK fdp 00:02:25.987 LINK spdk_nvme_perf 00:02:25.987 CXX test/cpp_headers/ublk.o 00:02:25.987 CXX test/cpp_headers/util.o 00:02:25.987 CXX test/cpp_headers/version.o 00:02:25.987 CXX test/cpp_headers/uuid.o 00:02:25.987 LINK spdk_nvme_identify 00:02:25.987 CXX test/cpp_headers/vfio_user_pci.o 00:02:25.987 CXX test/cpp_headers/vfio_user_spec.o 00:02:25.987 CXX test/cpp_headers/vhost.o 00:02:25.987 CXX test/cpp_headers/xor.o 00:02:25.987 CXX test/cpp_headers/vmd.o 00:02:25.987 CXX test/cpp_headers/zipf.o 00:02:26.248 LINK memory_ut 00:02:26.248 LINK vhost_fuzz 00:02:26.817 LINK cuse 00:02:27.076 LINK iscsi_fuzz 00:02:28.982 LINK esnap 00:02:29.241 00:02:29.241 real 0m47.578s 00:02:29.241 user 6m47.425s 00:02:29.241 sys 3m12.141s 00:02:29.241 16:13:38 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:02:29.241 16:13:38 -- common/autotest_common.sh@10 -- $ set +x 00:02:29.241 ************************************ 00:02:29.241 END TEST make 00:02:29.241 ************************************ 00:02:29.241 16:13:38 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:29.241 16:13:38 -- pm/common@30 -- $ signal_monitor_resources TERM 00:02:29.241 16:13:38 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:02:29.241 16:13:38 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:29.241 16:13:38 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:29.241 16:13:38 -- pm/common@45 -- $ pid=224414 00:02:29.241 16:13:38 -- pm/common@52 -- $ sudo kill -TERM 224414 00:02:29.241 16:13:38 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:29.241 16:13:38 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:29.241 16:13:38 -- pm/common@45 -- $ pid=224416 00:02:29.241 16:13:38 -- pm/common@52 -- $ sudo kill -TERM 224416 00:02:29.241 16:13:38 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:29.241 16:13:38 -- pm/common@44 -- $ [[ -e 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:29.241 16:13:38 -- pm/common@45 -- $ pid=224411 00:02:29.241 16:13:38 -- pm/common@52 -- $ sudo kill -TERM 224411 00:02:29.499 16:13:38 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:29.499 16:13:38 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:29.499 16:13:38 -- pm/common@45 -- $ pid=224410 00:02:29.499 16:13:38 -- pm/common@52 -- $ sudo kill -TERM 224410 00:02:29.499 16:13:38 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:02:29.499 16:13:38 -- nvmf/common.sh@7 -- # uname -s 00:02:29.499 16:13:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:29.499 16:13:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:29.499 16:13:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:29.500 16:13:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:29.500 16:13:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:29.500 16:13:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:29.500 16:13:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:29.500 16:13:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:29.500 16:13:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:29.500 16:13:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:29.500 16:13:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:02:29.500 16:13:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:02:29.500 16:13:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:29.500 16:13:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:29.500 16:13:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:29.500 16:13:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:29.500 16:13:38 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:02:29.500 16:13:38 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:29.500 16:13:38 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:29.500 16:13:38 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:29.500 16:13:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.500 16:13:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.500 16:13:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.500 16:13:38 -- paths/export.sh@5 -- # export PATH 00:02:29.500 16:13:38 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:29.500 16:13:38 -- nvmf/common.sh@47 -- # : 0 00:02:29.500 16:13:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:29.500 16:13:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:29.500 16:13:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:29.500 16:13:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:29.500 16:13:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:29.500 16:13:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:29.500 16:13:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:29.500 16:13:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:29.500 16:13:38 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:29.500 16:13:38 -- spdk/autotest.sh@32 -- # uname -s 00:02:29.500 16:13:38 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:29.500 16:13:38 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:29.500 16:13:38 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:29.500 16:13:38 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:29.500 16:13:38 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:29.500 16:13:38 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:29.500 16:13:38 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:29.500 16:13:38 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:29.500 16:13:38 -- spdk/autotest.sh@48 -- # udevadm_pid=280936 00:02:29.500 16:13:38 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:29.500 16:13:38 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:29.500 16:13:38 -- pm/common@17 -- # local monitor 00:02:29.500 16:13:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:29.500 16:13:38 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=280938 00:02:29.500 16:13:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:29.500 16:13:38 -- pm/common@21 -- # date +%s 00:02:29.500 16:13:38 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=280940 00:02:29.500 16:13:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:29.500 16:13:38 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=280944 00:02:29.500 16:13:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:29.500 16:13:38 -- pm/common@21 -- # date +%s 00:02:29.500 16:13:38 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=280947 00:02:29.500 16:13:38 -- pm/common@21 -- # date +%s 00:02:29.500 16:13:38 -- pm/common@26 -- # sleep 1 00:02:29.500 16:13:38 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714140818 00:02:29.500 16:13:38 -- pm/common@21 -- # date +%s 00:02:29.500 16:13:38 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714140818 00:02:29.500 16:13:38 -- pm/common@21 -- # sudo -E 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714140818 00:02:29.500 16:13:38 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714140818 00:02:29.759 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714140818_collect-vmstat.pm.log 00:02:29.759 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714140818_collect-bmc-pm.bmc.pm.log 00:02:29.759 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714140818_collect-cpu-load.pm.log 00:02:29.759 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714140818_collect-cpu-temp.pm.log 00:02:30.697 16:13:39 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:30.697 16:13:39 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:30.697 16:13:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:02:30.697 16:13:39 -- common/autotest_common.sh@10 -- # set +x 00:02:30.697 16:13:39 -- spdk/autotest.sh@59 -- # create_test_list 00:02:30.697 16:13:39 -- common/autotest_common.sh@734 -- # xtrace_disable 00:02:30.697 16:13:39 -- common/autotest_common.sh@10 -- # set +x 00:02:30.697 16:13:39 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:02:30.697 16:13:39 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:30.697 16:13:39 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:30.697 16:13:39 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:02:30.697 16:13:39 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:30.697 16:13:39 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:30.697 16:13:39 -- common/autotest_common.sh@1441 -- # uname 00:02:30.697 16:13:39 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:02:30.697 16:13:39 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:30.697 16:13:39 -- common/autotest_common.sh@1461 -- # uname 00:02:30.697 16:13:39 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:02:30.697 16:13:39 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:30.697 16:13:39 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:30.697 16:13:39 -- spdk/autotest.sh@72 -- # hash lcov 00:02:30.697 16:13:39 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:30.697 16:13:39 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:30.697 --rc lcov_branch_coverage=1 00:02:30.697 --rc lcov_function_coverage=1 00:02:30.697 --rc genhtml_branch_coverage=1 00:02:30.697 --rc genhtml_function_coverage=1 00:02:30.697 --rc genhtml_legend=1 00:02:30.697 --rc geninfo_all_blocks=1 00:02:30.697 ' 00:02:30.697 16:13:39 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:30.697 --rc lcov_branch_coverage=1 00:02:30.697 --rc lcov_function_coverage=1 00:02:30.697 --rc genhtml_branch_coverage=1 00:02:30.697 --rc genhtml_function_coverage=1 00:02:30.697 --rc genhtml_legend=1 00:02:30.697 --rc geninfo_all_blocks=1 00:02:30.697 ' 00:02:30.697 16:13:39 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:30.697 --rc 
lcov_branch_coverage=1 00:02:30.697 --rc lcov_function_coverage=1 00:02:30.697 --rc genhtml_branch_coverage=1 00:02:30.697 --rc genhtml_function_coverage=1 00:02:30.697 --rc genhtml_legend=1 00:02:30.697 --rc geninfo_all_blocks=1 00:02:30.697 --no-external' 00:02:30.697 16:13:39 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:30.697 --rc lcov_branch_coverage=1 00:02:30.697 --rc lcov_function_coverage=1 00:02:30.697 --rc genhtml_branch_coverage=1 00:02:30.697 --rc genhtml_function_coverage=1 00:02:30.697 --rc genhtml_legend=1 00:02:30.697 --rc geninfo_all_blocks=1 00:02:30.697 --no-external' 00:02:30.697 16:13:39 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:30.697 lcov: LCOV version 1.14 00:02:30.697 16:13:39 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:02:37.287 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:37.287 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:37.287 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:37.287 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:37.287 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:37.287 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:37.287 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:37.287 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:37.287 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:37.287 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:37.287 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:37.287 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:37.287 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:37.287 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:37.287 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:37.287 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:37.287 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:37.287 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:37.287 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:37.287 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:37.287 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:37.287 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:37.287 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:37.287 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:37.287 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:37.287 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:37.287 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:37.287 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:37.287 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:37.287 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:37.287 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:37.287 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:37.287 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:37.287 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:37.287 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:37.287 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:37.287 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:37.287 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:37.287 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:37.287 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:37.287 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:37.287 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:37.287 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:37.287 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:37.287 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:37.287 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:37.287 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 
00:02:37.287 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:37.287 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:37.287 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:37.287 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:37.287 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:37.287 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:37.287 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:37.287 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:37.287 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:37.287 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:37.287 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:37.287 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:37.287 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:37.287 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:37.287 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:37.287 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:37.287 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:37.287 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:37.287 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:37.287 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:37.287 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:37.287 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:37.287 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:37.287 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:37.287 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:37.287 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:37.287 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:37.287 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:37.287 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:37.287 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:37.287 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:37.287 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:37.287 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:37.287 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:37.287 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:37.287 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:37.287 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:37.287 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:37.287 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:37.288 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:37.288 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:37.288 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:37.288 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:37.288 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:37.288 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:37.288 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:37.288 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:37.288 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:37.288 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:37.288 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:37.288 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:37.288 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:37.288 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:37.288 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:37.288 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:37.288 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:37.288 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 
00:02:37.288 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:37.288 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:37.288 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:37.288 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:37.547 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:37.547 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:37.547 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:37.547 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:37.547 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:37.547 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:37.547 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:37.547 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:37.547 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:37.547 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:37.547 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:37.547 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:37.547 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:37.547 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:37.547 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:37.547 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:37.547 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:37.547 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:37.547 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:37.547 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:37.547 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:37.547 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:37.547 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:37.547 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:37.547 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:37.547 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:37.547 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:37.547 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:37.547 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:37.547 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:37.547 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:37.547 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:37.547 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:37.547 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:37.547 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:37.547 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:37.547 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:37.547 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:37.547 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:37.547 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:37.547 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:37.547 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:37.547 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:37.547 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:37.547 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:37.548 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:37.548 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:37.548 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:37.548 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:37.548 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:37.548 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:37.548 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:37.548 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 
00:02:37.548 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:37.806 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:37.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:37.806 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:37.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:37.806 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:37.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:37.806 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:37.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:37.806 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:37.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:37.806 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:37.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:37.806 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:37.807 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:41.101 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:41.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:49.226 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:49.226 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:49.226 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:49.226 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:49.226 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:49.227 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:54.508 16:14:03 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:54.508 16:14:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:02:54.508 16:14:03 -- common/autotest_common.sh@10 -- # set +x 00:02:54.508 16:14:03 -- spdk/autotest.sh@91 -- # rm -f 00:02:54.766 16:14:03 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:02:58.061 0000:5e:00.0 (144d a80a): Already using the nvme driver 00:02:58.319 0000:af:00.0 (8086 2701): Already using the nvme driver 00:02:58.319 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:58.319 0000:00:04.6 (8086 2021): 
Already using the ioatdma driver 00:02:58.319 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:58.319 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:58.319 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:58.319 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:58.319 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:58.319 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:58.578 0000:b0:00.0 (8086 2701): Already using the nvme driver 00:02:58.578 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:58.578 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:58.578 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:58.578 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:58.578 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:58.578 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:58.578 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:58.578 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:58.838 16:14:07 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:58.838 16:14:07 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:58.838 16:14:07 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:58.838 16:14:07 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:58.838 16:14:07 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:58.838 16:14:07 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:58.838 16:14:07 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:58.838 16:14:07 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:58.838 16:14:07 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:58.838 16:14:07 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:58.838 16:14:07 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:02:58.838 16:14:07 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:02:58.838 16:14:07 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:02:58.838 16:14:07 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:58.838 16:14:07 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:58.838 16:14:07 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:02:58.838 16:14:07 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:02:58.838 16:14:07 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:02:58.838 16:14:07 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:58.838 16:14:07 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:58.838 16:14:07 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:58.838 16:14:07 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:58.838 16:14:07 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:58.838 16:14:07 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:58.838 16:14:07 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:58.838 No valid GPT data, bailing 00:02:58.838 16:14:07 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:58.838 16:14:07 -- scripts/common.sh@391 -- # pt= 00:02:58.838 16:14:07 -- scripts/common.sh@392 -- # return 1 00:02:58.838 16:14:07 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M 
count=1 00:02:58.838 1+0 records in 00:02:58.838 1+0 records out 00:02:58.838 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0049482 s, 212 MB/s 00:02:58.838 16:14:07 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:58.838 16:14:07 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:58.838 16:14:07 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:02:58.838 16:14:07 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:02:58.838 16:14:07 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:02:58.838 No valid GPT data, bailing 00:02:58.838 16:14:07 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:02:58.838 16:14:07 -- scripts/common.sh@391 -- # pt= 00:02:58.838 16:14:07 -- scripts/common.sh@392 -- # return 1 00:02:58.838 16:14:07 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:02:58.838 1+0 records in 00:02:58.838 1+0 records out 00:02:58.838 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00401656 s, 261 MB/s 00:02:58.838 16:14:07 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:58.838 16:14:07 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:58.838 16:14:07 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n1 00:02:58.839 16:14:07 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt 00:02:58.839 16:14:07 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:02:58.839 No valid GPT data, bailing 00:02:58.839 16:14:07 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:02:58.839 16:14:07 -- scripts/common.sh@391 -- # pt= 00:02:58.839 16:14:07 -- scripts/common.sh@392 -- # return 1 00:02:58.839 16:14:07 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:02:58.839 1+0 records in 00:02:58.839 1+0 records out 00:02:58.839 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00426355 s, 246 MB/s 00:02:58.839 16:14:07 -- spdk/autotest.sh@118 -- # sync 00:02:58.839 16:14:07 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:58.839 16:14:07 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:58.839 16:14:07 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:04.118 16:14:12 -- spdk/autotest.sh@124 -- # uname -s 00:03:04.118 16:14:12 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:04.118 16:14:12 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:03:04.118 16:14:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:04.118 16:14:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:04.118 16:14:12 -- common/autotest_common.sh@10 -- # set +x 00:03:04.118 ************************************ 00:03:04.118 START TEST setup.sh 00:03:04.118 ************************************ 00:03:04.118 16:14:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:03:04.118 * Looking for test storage... 
00:03:04.118 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:04.118 16:14:13 -- setup/test-setup.sh@10 -- # uname -s 00:03:04.118 16:14:13 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:04.118 16:14:13 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:03:04.119 16:14:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:04.119 16:14:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:04.119 16:14:13 -- common/autotest_common.sh@10 -- # set +x 00:03:04.378 ************************************ 00:03:04.378 START TEST acl 00:03:04.378 ************************************ 00:03:04.378 16:14:13 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:03:04.378 * Looking for test storage... 00:03:04.378 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:04.378 16:14:13 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:04.378 16:14:13 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:04.378 16:14:13 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:04.378 16:14:13 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:04.378 16:14:13 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:04.378 16:14:13 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:04.378 16:14:13 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:04.378 16:14:13 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:04.378 16:14:13 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:04.378 16:14:13 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:04.378 16:14:13 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:03:04.378 16:14:13 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:03:04.378 16:14:13 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:04.378 16:14:13 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:04.378 16:14:13 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:04.378 16:14:13 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:03:04.378 16:14:13 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:03:04.378 16:14:13 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:04.378 16:14:13 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:04.378 16:14:13 -- setup/acl.sh@12 -- # devs=() 00:03:04.378 16:14:13 -- setup/acl.sh@12 -- # declare -a devs 00:03:04.378 16:14:13 -- setup/acl.sh@13 -- # drivers=() 00:03:04.378 16:14:13 -- setup/acl.sh@13 -- # declare -A drivers 00:03:04.378 16:14:13 -- setup/acl.sh@51 -- # setup reset 00:03:04.378 16:14:13 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:04.378 16:14:13 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:08.574 16:14:17 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:08.574 16:14:17 -- setup/acl.sh@16 -- # local dev driver 00:03:08.574 16:14:17 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:08.574 16:14:17 -- setup/acl.sh@15 -- # setup output status 00:03:08.574 16:14:17 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:08.574 16:14:17 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:03:11.869 Hugepages 00:03:11.869 
node hugesize free / total 00:03:11.869 16:14:20 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:11.869 16:14:20 -- setup/acl.sh@19 -- # continue 00:03:11.869 16:14:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.869 16:14:20 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:11.869 16:14:20 -- setup/acl.sh@19 -- # continue 00:03:11.869 16:14:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.869 16:14:20 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:11.869 16:14:20 -- setup/acl.sh@19 -- # continue 00:03:11.869 16:14:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.869 00:03:11.869 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:11.869 16:14:20 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:11.869 16:14:20 -- setup/acl.sh@19 -- # continue 00:03:11.869 16:14:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.869 16:14:20 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:11.869 16:14:20 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:11.869 16:14:20 -- setup/acl.sh@20 -- # continue 00:03:11.869 16:14:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.869 16:14:20 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:11.869 16:14:20 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:11.869 16:14:20 -- setup/acl.sh@20 -- # continue 00:03:11.869 16:14:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.869 16:14:20 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:11.869 16:14:20 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:11.869 16:14:20 -- setup/acl.sh@20 -- # continue 00:03:11.869 16:14:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.869 16:14:20 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:11.869 16:14:20 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:11.869 16:14:20 -- setup/acl.sh@20 -- # continue 00:03:11.869 16:14:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.869 16:14:20 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:11.869 16:14:20 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:11.869 16:14:20 -- setup/acl.sh@20 -- # continue 00:03:11.869 16:14:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.869 16:14:20 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:11.869 16:14:20 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:11.869 16:14:20 -- setup/acl.sh@20 -- # continue 00:03:11.869 16:14:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.869 16:14:20 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:11.869 16:14:20 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:11.869 16:14:20 -- setup/acl.sh@20 -- # continue 00:03:11.869 16:14:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.869 16:14:20 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:11.869 16:14:20 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:11.869 16:14:20 -- setup/acl.sh@20 -- # continue 00:03:11.869 16:14:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.869 16:14:20 -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:03:11.869 16:14:20 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:11.869 16:14:20 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:11.869 16:14:20 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:11.869 16:14:20 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:11.869 16:14:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.869 
16:14:20 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:11.869 16:14:20 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:11.869 16:14:20 -- setup/acl.sh@20 -- # continue 00:03:11.869 16:14:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.869 16:14:20 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:11.869 16:14:20 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:11.869 16:14:20 -- setup/acl.sh@20 -- # continue 00:03:11.869 16:14:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.869 16:14:20 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:11.869 16:14:20 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:11.869 16:14:20 -- setup/acl.sh@20 -- # continue 00:03:11.869 16:14:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.869 16:14:20 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:11.869 16:14:20 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:11.869 16:14:20 -- setup/acl.sh@20 -- # continue 00:03:11.869 16:14:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.869 16:14:20 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:11.869 16:14:20 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:11.869 16:14:20 -- setup/acl.sh@20 -- # continue 00:03:11.869 16:14:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.869 16:14:20 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:11.869 16:14:20 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:11.869 16:14:20 -- setup/acl.sh@20 -- # continue 00:03:11.869 16:14:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.869 16:14:20 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:11.869 16:14:20 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:11.869 16:14:20 -- setup/acl.sh@20 -- # continue 00:03:11.869 16:14:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.869 16:14:20 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:11.869 16:14:20 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:11.869 16:14:20 -- setup/acl.sh@20 -- # continue 00:03:11.869 16:14:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.869 16:14:20 -- setup/acl.sh@19 -- # [[ 0000:af:00.0 == *:*:*.* ]] 00:03:11.869 16:14:20 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:11.869 16:14:20 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\a\f\:\0\0\.\0* ]] 00:03:11.869 16:14:20 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:11.869 16:14:20 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:11.869 16:14:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.869 16:14:20 -- setup/acl.sh@19 -- # [[ 0000:b0:00.0 == *:*:*.* ]] 00:03:11.869 16:14:20 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:11.869 16:14:20 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\b\0\:\0\0\.\0* ]] 00:03:11.869 16:14:20 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:11.869 16:14:20 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:11.869 16:14:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:11.869 16:14:20 -- setup/acl.sh@24 -- # (( 3 > 0 )) 00:03:11.869 16:14:20 -- setup/acl.sh@54 -- # run_test denied denied 00:03:11.869 16:14:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:11.869 16:14:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:11.869 16:14:20 -- common/autotest_common.sh@10 -- # set +x 00:03:11.870 ************************************ 00:03:11.870 START TEST denied 00:03:11.870 ************************************ 00:03:11.870 16:14:20 -- 
common/autotest_common.sh@1111 -- # denied 00:03:11.870 16:14:20 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:03:11.870 16:14:20 -- setup/acl.sh@38 -- # setup output config 00:03:11.870 16:14:20 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:03:11.870 16:14:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:11.870 16:14:20 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:15.184 0000:5e:00.0 (144d a80a): Skipping denied controller at 0000:5e:00.0 00:03:15.184 16:14:23 -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:03:15.184 16:14:23 -- setup/acl.sh@28 -- # local dev driver 00:03:15.184 16:14:23 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:15.184 16:14:23 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:03:15.184 16:14:23 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:03:15.184 16:14:23 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:15.184 16:14:23 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:15.184 16:14:23 -- setup/acl.sh@41 -- # setup reset 00:03:15.184 16:14:23 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:15.184 16:14:23 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:19.375 00:03:19.375 real 0m7.488s 00:03:19.375 user 0m2.088s 00:03:19.375 sys 0m4.395s 00:03:19.375 16:14:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:19.375 16:14:28 -- common/autotest_common.sh@10 -- # set +x 00:03:19.375 ************************************ 00:03:19.375 END TEST denied 00:03:19.375 ************************************ 00:03:19.375 16:14:28 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:19.375 16:14:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:19.375 16:14:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:19.376 16:14:28 -- common/autotest_common.sh@10 -- # set +x 00:03:19.635 ************************************ 00:03:19.635 START TEST allowed 00:03:19.635 ************************************ 00:03:19.635 16:14:28 -- common/autotest_common.sh@1111 -- # allowed 00:03:19.635 16:14:28 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:03:19.635 16:14:28 -- setup/acl.sh@45 -- # setup output config 00:03:19.635 16:14:28 -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:03:19.635 16:14:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:19.635 16:14:28 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:24.914 0000:5e:00.0 (144d a80a): nvme -> vfio-pci 00:03:24.914 16:14:33 -- setup/acl.sh@47 -- # verify 0000:af:00.0 0000:b0:00.0 00:03:24.914 16:14:33 -- setup/acl.sh@28 -- # local dev driver 00:03:24.914 16:14:33 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:24.914 16:14:33 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:af:00.0 ]] 00:03:24.914 16:14:33 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:af:00.0/driver 00:03:24.914 16:14:33 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:24.914 16:14:33 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:24.914 16:14:33 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:24.914 16:14:33 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:b0:00.0 ]] 00:03:24.914 16:14:33 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:b0:00.0/driver 00:03:24.914 16:14:33 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:24.914 16:14:33 -- 
setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:24.914 16:14:33 -- setup/acl.sh@48 -- # setup reset 00:03:24.914 16:14:33 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:24.914 16:14:33 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:29.113 00:03:29.113 real 0m9.307s 00:03:29.113 user 0m2.659s 00:03:29.113 sys 0m5.002s 00:03:29.113 16:14:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:29.113 16:14:37 -- common/autotest_common.sh@10 -- # set +x 00:03:29.113 ************************************ 00:03:29.113 END TEST allowed 00:03:29.113 ************************************ 00:03:29.113 00:03:29.113 real 0m24.723s 00:03:29.113 user 0m7.610s 00:03:29.113 sys 0m14.629s 00:03:29.113 16:14:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:29.113 16:14:37 -- common/autotest_common.sh@10 -- # set +x 00:03:29.113 ************************************ 00:03:29.113 END TEST acl 00:03:29.113 ************************************ 00:03:29.113 16:14:37 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:03:29.113 16:14:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:29.113 16:14:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:29.113 16:14:37 -- common/autotest_common.sh@10 -- # set +x 00:03:29.113 ************************************ 00:03:29.113 START TEST hugepages 00:03:29.113 ************************************ 00:03:29.113 16:14:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:03:29.376 * Looking for test storage... 00:03:29.376 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:29.376 16:14:38 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:29.376 16:14:38 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:29.376 16:14:38 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:29.376 16:14:38 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:29.376 16:14:38 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:29.376 16:14:38 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:29.376 16:14:38 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:29.376 16:14:38 -- setup/common.sh@18 -- # local node= 00:03:29.376 16:14:38 -- setup/common.sh@19 -- # local var val 00:03:29.376 16:14:38 -- setup/common.sh@20 -- # local mem_f mem 00:03:29.376 16:14:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.376 16:14:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.376 16:14:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.376 16:14:38 -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.376 16:14:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.376 16:14:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65456540 kB' 'MemFree: 47614440 kB' 'MemAvailable: 48032412 kB' 'Buffers: 1064 kB' 'Cached: 11189144 kB' 'SwapCached: 0 kB' 'Active: 11417352 kB' 'Inactive: 264060 kB' 'Active(anon): 10843732 kB' 'Inactive(anon): 0 kB' 'Active(file): 573620 kB' 'Inactive(file): 264060 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495080 kB' 'Mapped: 148820 kB' 'Shmem: 10352528 kB' 
'KReclaimable: 185868 kB' 'Slab: 539736 kB' 'SReclaimable: 185868 kB' 'SUnreclaim: 353868 kB' 'KernelStack: 16016 kB' 'PageTables: 7304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 39019720 kB' 'Committed_AS: 12314872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199796 kB' 'VmallocChunk: 0 kB' 'Percpu: 50560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 703912 kB' 'DirectMap2M: 18894848 kB' 'DirectMap1G: 49283072 kB' 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.376 16:14:38 
-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.376 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.376 16:14:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.377 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.377 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.377 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.377 16:14:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.377 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.377 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.377 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.377 16:14:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.377 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.377 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.377 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.377 16:14:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.377 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.377 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.377 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.377 16:14:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.377 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.377 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.377 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.377 16:14:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.377 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.377 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 
00:03:29.377 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.377 16:14:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.377 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.377 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.377 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.377 16:14:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.377 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.377 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.377 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.377 16:14:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.377 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.377 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.377 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.377 16:14:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.377 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.377 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.377 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.377 16:14:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.377 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.377 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.377 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.377 16:14:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.377 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.377 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.377 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.377 16:14:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.377 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.377 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.377 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.377 16:14:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.377 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.377 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.377 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.377 16:14:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.377 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.377 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.377 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.377 16:14:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.377 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.377 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.377 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.377 16:14:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.377 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.377 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.377 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.377 16:14:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.377 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.377 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.377 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.377 16:14:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.377 16:14:38 -- setup/common.sh@32 -- 
# continue 00:03:29.377 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.377 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.377 16:14:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.377 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.377 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.377 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.377 16:14:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.377 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.377 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.377 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.377 16:14:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.377 16:14:38 -- setup/common.sh@32 -- # continue 00:03:29.377 16:14:38 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.377 16:14:38 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.377 16:14:38 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.377 16:14:38 -- setup/common.sh@33 -- # echo 2048 00:03:29.377 16:14:38 -- setup/common.sh@33 -- # return 0 00:03:29.377 16:14:38 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:29.377 16:14:38 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:29.377 16:14:38 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:29.377 16:14:38 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:29.377 16:14:38 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:29.377 16:14:38 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:29.377 16:14:38 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:29.377 16:14:38 -- setup/hugepages.sh@207 -- # get_nodes 00:03:29.377 16:14:38 -- setup/hugepages.sh@27 -- # local node 00:03:29.377 16:14:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:29.377 16:14:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:29.377 16:14:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:29.377 16:14:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:29.377 16:14:38 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:29.377 16:14:38 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:29.377 16:14:38 -- setup/hugepages.sh@208 -- # clear_hp 00:03:29.377 16:14:38 -- setup/hugepages.sh@37 -- # local node hp 00:03:29.377 16:14:38 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:29.377 16:14:38 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:29.377 16:14:38 -- setup/hugepages.sh@41 -- # echo 0 00:03:29.377 16:14:38 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:29.377 16:14:38 -- setup/hugepages.sh@41 -- # echo 0 00:03:29.377 16:14:38 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:29.377 16:14:38 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:29.377 16:14:38 -- setup/hugepages.sh@41 -- # echo 0 00:03:29.377 16:14:38 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:29.377 16:14:38 -- setup/hugepages.sh@41 -- # echo 0 00:03:29.377 16:14:38 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:29.377 16:14:38 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 
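The trace above is the hugepages helper walking /proc/meminfo until it hits the Hugepagesize field (2048 kB on this machine), recording the default size and the sysfs/procfs knobs, enumerating the two NUMA nodes, and then zeroing every per-node hugepage counter before the test allocates its own pages. A minimal sketch of that pattern, assuming only the standard kernel interfaces visible in the trace (/proc/meminfo, /proc/sys/vm/nr_hugepages, and the per-node hugepages-*/nr_hugepages files); the function name below is illustrative, not the SPDK script's:

```bash
#!/usr/bin/env bash
# Sketch of the detect-and-clear pattern shown in the trace above
# (not the SPDK setup/hugepages.sh itself). Requires root to write sysfs.

# Default hugepage size as reported by the kernel, in kB (2048 in this run).
default_hugepages_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)

# Global counter for the default size lives here; per-size, per-node
# counters live under each node's hugepages directory.
global_nr_hugepages=/proc/sys/vm/nr_hugepages

clear_node_hugepages() {
    local node hp
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            # Drop any pages left over from earlier runs, mirroring the
            # "echo 0" per node/size seen in the clear_hp loop above.
            echo 0 > "$hp/nr_hugepages"
        done
    done
}

clear_node_hugepages
echo "default hugepage size: ${default_hugepages_kb} kB"
```

On this two-node box the equivalent loop in the trace touches node0 and node1 (no_nodes=2) and clears each supported hugepage size before CLEAR_HUGE=yes is exported and the default_setup test starts.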
00:03:29.377 16:14:38 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:29.377 16:14:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:29.377 16:14:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:29.377 16:14:38 -- common/autotest_common.sh@10 -- # set +x 00:03:29.637 ************************************ 00:03:29.637 START TEST default_setup 00:03:29.637 ************************************ 00:03:29.637 16:14:38 -- common/autotest_common.sh@1111 -- # default_setup 00:03:29.637 16:14:38 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:29.637 16:14:38 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:29.637 16:14:38 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:29.637 16:14:38 -- setup/hugepages.sh@51 -- # shift 00:03:29.637 16:14:38 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:29.637 16:14:38 -- setup/hugepages.sh@52 -- # local node_ids 00:03:29.637 16:14:38 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:29.637 16:14:38 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:29.637 16:14:38 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:29.637 16:14:38 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:29.638 16:14:38 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:29.638 16:14:38 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:29.638 16:14:38 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:29.638 16:14:38 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:29.638 16:14:38 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:29.638 16:14:38 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:29.638 16:14:38 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:29.638 16:14:38 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:29.638 16:14:38 -- setup/hugepages.sh@73 -- # return 0 00:03:29.638 16:14:38 -- setup/hugepages.sh@137 -- # setup output 00:03:29.638 16:14:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:29.638 16:14:38 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:32.938 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:32.938 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:32.938 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:32.938 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:32.938 0000:af:00.0 (8086 2701): nvme -> vfio-pci 00:03:32.938 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:32.938 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:32.938 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:32.938 0000:5e:00.0 (144d a80a): nvme -> vfio-pci 00:03:32.938 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:32.938 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:32.938 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:32.938 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:32.938 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:32.938 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:32.938 0000:b0:00.0 (8086 2701): nvme -> vfio-pci 00:03:32.938 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:32.938 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:32.938 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:32.938 16:14:41 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:32.938 16:14:41 -- setup/hugepages.sh@89 -- # local node 00:03:32.938 16:14:41 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:32.938 16:14:41 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:32.938 16:14:41 -- 
setup/hugepages.sh@92 -- # local surp 00:03:32.938 16:14:41 -- setup/hugepages.sh@93 -- # local resv 00:03:32.938 16:14:41 -- setup/hugepages.sh@94 -- # local anon 00:03:32.938 16:14:41 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:32.938 16:14:41 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:32.938 16:14:41 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:32.938 16:14:41 -- setup/common.sh@18 -- # local node= 00:03:32.938 16:14:41 -- setup/common.sh@19 -- # local var val 00:03:32.938 16:14:41 -- setup/common.sh@20 -- # local mem_f mem 00:03:32.938 16:14:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.938 16:14:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.938 16:14:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.938 16:14:41 -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.938 16:14:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.938 16:14:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65456540 kB' 'MemFree: 49793716 kB' 'MemAvailable: 50211104 kB' 'Buffers: 1064 kB' 'Cached: 11189236 kB' 'SwapCached: 0 kB' 'Active: 11434376 kB' 'Inactive: 264060 kB' 'Active(anon): 10860756 kB' 'Inactive(anon): 0 kB' 'Active(file): 573620 kB' 'Inactive(file): 264060 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 511580 kB' 'Mapped: 148820 kB' 'Shmem: 10352620 kB' 'KReclaimable: 184700 kB' 'Slab: 536268 kB' 'SReclaimable: 184700 kB' 'SUnreclaim: 351568 kB' 'KernelStack: 16144 kB' 'PageTables: 7756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40068296 kB' 'Committed_AS: 12333828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199812 kB' 'VmallocChunk: 0 kB' 'Percpu: 50560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 703912 kB' 'DirectMap2M: 18894848 kB' 'DirectMap1G: 49283072 kB' 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 
00:03:32.938 16:14:41 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 
00:03:32.938 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # continue 
00:03:32.938 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.938 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.938 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.939 16:14:41 -- setup/common.sh@33 -- # echo 0 00:03:32.939 16:14:41 -- setup/common.sh@33 -- # return 0 00:03:32.939 16:14:41 -- setup/hugepages.sh@97 -- # anon=0 00:03:32.939 16:14:41 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:32.939 16:14:41 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.939 16:14:41 -- setup/common.sh@18 -- # local node= 00:03:32.939 16:14:41 -- setup/common.sh@19 -- # local var val 00:03:32.939 16:14:41 -- setup/common.sh@20 -- # local mem_f mem 00:03:32.939 
16:14:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.939 16:14:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.939 16:14:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.939 16:14:41 -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.939 16:14:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.939 16:14:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65456540 kB' 'MemFree: 49793960 kB' 'MemAvailable: 50211316 kB' 'Buffers: 1064 kB' 'Cached: 11189244 kB' 'SwapCached: 0 kB' 'Active: 11434796 kB' 'Inactive: 264060 kB' 'Active(anon): 10861176 kB' 'Inactive(anon): 0 kB' 'Active(file): 573620 kB' 'Inactive(file): 264060 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512120 kB' 'Mapped: 148768 kB' 'Shmem: 10352628 kB' 'KReclaimable: 184636 kB' 'Slab: 536200 kB' 'SReclaimable: 184636 kB' 'SUnreclaim: 351564 kB' 'KernelStack: 16160 kB' 'PageTables: 7840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40068296 kB' 'Committed_AS: 12333840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199796 kB' 'VmallocChunk: 0 kB' 'Percpu: 50560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 703912 kB' 'DirectMap2M: 18894848 kB' 'DirectMap1G: 49283072 kB' 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.939 16:14:41 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.939 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.939 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # 
continue 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.940 16:14:41 
-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.940 16:14:41 -- setup/common.sh@33 -- # echo 0 00:03:32.940 16:14:41 -- setup/common.sh@33 -- # return 0 00:03:32.940 16:14:41 -- setup/hugepages.sh@99 -- # surp=0 00:03:32.940 16:14:41 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:32.940 16:14:41 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:32.940 16:14:41 -- setup/common.sh@18 -- # local node= 00:03:32.940 16:14:41 -- setup/common.sh@19 -- # local var val 00:03:32.940 16:14:41 -- setup/common.sh@20 -- # local mem_f mem 00:03:32.940 16:14:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.940 16:14:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.940 16:14:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.940 16:14:41 -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.940 16:14:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.940 16:14:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65456540 kB' 'MemFree: 49794704 kB' 'MemAvailable: 50212060 kB' 'Buffers: 1064 kB' 'Cached: 11189256 kB' 'SwapCached: 0 kB' 'Active: 11434720 kB' 'Inactive: 264060 kB' 'Active(anon): 10861100 kB' 'Inactive(anon): 0 kB' 'Active(file): 573620 kB' 'Inactive(file): 264060 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 
'Writeback: 0 kB' 'AnonPages: 511992 kB' 'Mapped: 148768 kB' 'Shmem: 10352640 kB' 'KReclaimable: 184636 kB' 'Slab: 536200 kB' 'SReclaimable: 184636 kB' 'SUnreclaim: 351564 kB' 'KernelStack: 16160 kB' 'PageTables: 7836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40068296 kB' 'Committed_AS: 12333852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199796 kB' 'VmallocChunk: 0 kB' 'Percpu: 50560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 703912 kB' 'DirectMap2M: 18894848 kB' 'DirectMap1G: 49283072 kB' 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.940 16:14:41 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.940 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.940 16:14:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:32.941 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # continue 
00:03:32.941 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.941 16:14:41 -- setup/common.sh@33 -- # echo 0 00:03:32.941 16:14:41 -- setup/common.sh@33 -- # return 0 00:03:32.941 16:14:41 -- setup/hugepages.sh@100 -- # resv=0 00:03:32.941 16:14:41 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:32.941 nr_hugepages=1024 00:03:32.941 16:14:41 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:32.941 resv_hugepages=0 00:03:32.941 16:14:41 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:32.941 surplus_hugepages=0 00:03:32.941 16:14:41 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:32.941 anon_hugepages=0 00:03:32.941 16:14:41 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:32.941 16:14:41 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:32.941 16:14:41 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:32.941 16:14:41 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:32.941 16:14:41 -- setup/common.sh@18 -- # local node= 00:03:32.941 16:14:41 -- setup/common.sh@19 -- # local var val 00:03:32.941 16:14:41 -- setup/common.sh@20 -- # local mem_f mem 00:03:32.941 16:14:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.941 16:14:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.941 16:14:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.941 16:14:41 -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.941 16:14:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.941 16:14:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65456540 kB' 'MemFree: 49796476 kB' 'MemAvailable: 50213832 kB' 'Buffers: 1064 kB' 'Cached: 11189284 kB' 'SwapCached: 0 kB' 'Active: 11434444 kB' 'Inactive: 264060 kB' 'Active(anon): 10860824 kB' 'Inactive(anon): 0 kB' 'Active(file): 573620 kB' 'Inactive(file): 264060 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 511660 kB' 'Mapped: 148768 kB' 'Shmem: 10352668 kB' 'KReclaimable: 184636 kB' 'Slab: 536192 kB' 'SReclaimable: 184636 kB' 'SUnreclaim: 351556 kB' 'KernelStack: 16160 kB' 'PageTables: 7832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40068296 kB' 'Committed_AS: 12333868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199796 kB' 'VmallocChunk: 0 kB' 'Percpu: 50560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 703912 kB' 'DirectMap2M: 18894848 kB' 'DirectMap1G: 49283072 kB' 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.941 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.941 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.942 16:14:41 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # 
continue 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.942 
16:14:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.942 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.942 16:14:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.942 16:14:41 -- setup/common.sh@33 -- # echo 1024 00:03:32.942 16:14:41 -- setup/common.sh@33 -- # return 0 00:03:32.942 16:14:41 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:32.942 16:14:41 -- setup/hugepages.sh@112 -- # get_nodes 00:03:32.942 16:14:41 -- setup/hugepages.sh@27 -- # local 
node 00:03:32.942 16:14:41 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:32.942 16:14:41 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:32.942 16:14:41 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:32.942 16:14:41 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:32.943 16:14:41 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:32.943 16:14:41 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:32.943 16:14:41 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:32.943 16:14:41 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:32.943 16:14:41 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:32.943 16:14:41 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.943 16:14:41 -- setup/common.sh@18 -- # local node=0 00:03:32.943 16:14:41 -- setup/common.sh@19 -- # local var val 00:03:32.943 16:14:41 -- setup/common.sh@20 -- # local mem_f mem 00:03:32.943 16:14:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.943 16:14:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:32.943 16:14:41 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:32.943 16:14:41 -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.943 16:14:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.943 16:14:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634076 kB' 'MemFree: 22000564 kB' 'MemUsed: 10633512 kB' 'SwapCached: 0 kB' 'Active: 7261996 kB' 'Inactive: 65060 kB' 'Active(anon): 6883772 kB' 'Inactive(anon): 0 kB' 'Active(file): 378224 kB' 'Inactive(file): 65060 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7183916 kB' 'Mapped: 72792 kB' 'AnonPages: 146424 kB' 'Shmem: 6740632 kB' 'KernelStack: 8456 kB' 'PageTables: 2712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87340 kB' 'Slab: 289616 kB' 'SReclaimable: 87340 kB' 'SUnreclaim: 202276 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.943 
16:14:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.943 16:14:41 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:32.943 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # continue 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:32.943 16:14:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:32.943 16:14:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.943 16:14:41 -- setup/common.sh@33 -- # echo 0 00:03:32.943 16:14:41 -- setup/common.sh@33 -- # return 0 00:03:32.943 16:14:41 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:32.943 16:14:41 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:32.943 16:14:41 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:32.943 16:14:41 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:32.943 16:14:41 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:32.943 node0=1024 expecting 1024 00:03:32.943 16:14:41 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:32.943 00:03:32.943 real 0m3.309s 00:03:32.943 user 0m1.178s 00:03:32.943 sys 0m2.105s 00:03:32.943 16:14:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:32.943 16:14:41 -- common/autotest_common.sh@10 -- # set +x 00:03:32.943 ************************************ 00:03:32.943 END TEST default_setup 00:03:32.943 ************************************ 00:03:32.943 16:14:41 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:32.943 16:14:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:32.943 16:14:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:32.943 16:14:41 -- common/autotest_common.sh@10 -- # set +x 00:03:32.943 ************************************ 00:03:32.943 START TEST per_node_1G_alloc 00:03:32.943 ************************************ 00:03:32.943 16:14:41 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc 
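For reference, the meminfo lookups that the xtrace above keeps stepping through all follow one pattern: pick /proc/meminfo (or the per-node /sys/devices/system/node/nodeN/meminfo when a node argument is given), strip any "Node N " prefix, then scan key/value pairs until the requested field matches. A minimal sketch of that pattern, reconstructed from the trace output rather than copied from the SPDK setup/common.sh itself:

  shopt -s extglob                      # for the +([0-9]) pattern below

  get_meminfo() {
      local get=$1 node=${2:-}          # field to look up, optional NUMA node
      local var val _ mem_f=/proc/meminfo
      local -a mem
      # Per-node queries read the node-local meminfo instead of the global one.
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")  # drop the "Node N " prefix on per-node files
      while IFS=': ' read -r var val _; do
          # Print the value of the requested field and stop scanning.
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

On this box, get_meminfo HugePages_Total walks /proc/meminfo and prints 1024, while get_meminfo HugePages_Surp 0 reads node0's file and prints 0, matching the echo/return lines in the trace.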
00:03:32.943 16:14:41 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:32.943 16:14:41 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:32.943 16:14:41 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:32.943 16:14:41 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:32.943 16:14:41 -- setup/hugepages.sh@51 -- # shift 00:03:32.943 16:14:41 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:32.943 16:14:41 -- setup/hugepages.sh@52 -- # local node_ids 00:03:32.943 16:14:41 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:32.944 16:14:41 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:32.944 16:14:41 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:32.944 16:14:41 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:32.944 16:14:41 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:32.944 16:14:41 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:32.944 16:14:41 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:32.944 16:14:41 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:32.944 16:14:41 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:32.944 16:14:41 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:32.944 16:14:41 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:32.944 16:14:41 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:32.944 16:14:41 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:32.944 16:14:41 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:32.944 16:14:41 -- setup/hugepages.sh@73 -- # return 0 00:03:32.944 16:14:41 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:32.944 16:14:41 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:32.944 16:14:41 -- setup/hugepages.sh@146 -- # setup output 00:03:32.944 16:14:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:32.944 16:14:41 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:36.251 0000:af:00.0 (8086 2701): Already using the vfio-pci driver 00:03:36.251 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:36.251 0000:5e:00.0 (144d a80a): Already using the vfio-pci driver 00:03:36.251 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:36.251 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:36.251 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:36.251 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:36.251 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:36.251 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:36.251 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:36.251 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:36.251 0000:b0:00.0 (8086 2701): Already using the vfio-pci driver 00:03:36.251 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:36.251 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:36.251 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:36.251 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:36.251 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:36.251 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:36.251 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:36.251 16:14:45 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:36.251 16:14:45 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:36.251 16:14:45 -- 
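The per_node_1G_alloc prologue traced above boils down to simple arithmetic: 1 GiB expressed in kB, divided by the 2048 kB default hugepage size reported in the meminfo dumps, gives 512 pages, and that count is assigned to each of the two requested nodes before setup.sh is re-run. A sketch of that distribution, with names mirroring the trace (a reconstruction, not the verbatim setup/hugepages.sh):

  size_kb=1048576                                     # 1 GiB worth of hugepages, in kB
  default_hugepage_kb=2048                            # Hugepagesize from /proc/meminfo
  nr_hugepages=$(( size_kb / default_hugepage_kb ))   # = 512
  nodes_test=()
  for node in 0 1; do                                 # HUGENODE=0,1 in the trace
      nodes_test[node]=$nr_hugepages                  # 512 x 2 MiB pages per node
  done
  # setup.sh is then re-run with NRHUGE=512 HUGENODE=0,1; the hugepages.sh@147 line
  # above shows the resulting nr_hugepages=1024 (512 per node) handed to verify_nr_hugepages.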
setup/hugepages.sh@89 -- # local node 00:03:36.251 16:14:45 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:36.251 16:14:45 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:36.251 16:14:45 -- setup/hugepages.sh@92 -- # local surp 00:03:36.251 16:14:45 -- setup/hugepages.sh@93 -- # local resv 00:03:36.251 16:14:45 -- setup/hugepages.sh@94 -- # local anon 00:03:36.251 16:14:45 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:36.251 16:14:45 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:36.251 16:14:45 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:36.251 16:14:45 -- setup/common.sh@18 -- # local node= 00:03:36.251 16:14:45 -- setup/common.sh@19 -- # local var val 00:03:36.251 16:14:45 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.251 16:14:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.251 16:14:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.251 16:14:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.251 16:14:45 -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.251 16:14:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.251 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.251 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.251 16:14:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65456540 kB' 'MemFree: 49801156 kB' 'MemAvailable: 50218512 kB' 'Buffers: 1064 kB' 'Cached: 11189348 kB' 'SwapCached: 0 kB' 'Active: 11435164 kB' 'Inactive: 264060 kB' 'Active(anon): 10861544 kB' 'Inactive(anon): 0 kB' 'Active(file): 573620 kB' 'Inactive(file): 264060 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 511704 kB' 'Mapped: 147512 kB' 'Shmem: 10352732 kB' 'KReclaimable: 184636 kB' 'Slab: 536796 kB' 'SReclaimable: 184636 kB' 'SUnreclaim: 352160 kB' 'KernelStack: 16096 kB' 'PageTables: 7476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40068296 kB' 'Committed_AS: 12318420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199828 kB' 'VmallocChunk: 0 kB' 'Percpu: 50560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 703912 kB' 'DirectMap2M: 18894848 kB' 'DirectMap1G: 49283072 kB' 00:03:36.251 16:14:45 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.251 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.251 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.251 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.251 16:14:45 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.251 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.251 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.251 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.251 16:14:45 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.251 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.516 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.516 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.516 16:14:45 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:03:36.516 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.516 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.516 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.516 16:14:45 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.516 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.516 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.516 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.516 16:14:45 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.516 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.516 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.516 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.516 16:14:45 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.516 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.516 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.516 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.516 16:14:45 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.516 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.516 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.516 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.516 16:14:45 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.516 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.516 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.516 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.516 16:14:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.516 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.516 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.516 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.516 16:14:45 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.516 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.516 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.516 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.516 16:14:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.516 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.516 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.516 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.516 16:14:45 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.516 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.516 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.516 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.516 16:14:45 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.516 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.516 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.516 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.517 16:14:45 
-- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.517 16:14:45 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:36.517 16:14:45 -- setup/common.sh@33 -- # echo 0 00:03:36.517 16:14:45 -- setup/common.sh@33 -- # return 0 00:03:36.517 16:14:45 -- setup/hugepages.sh@97 -- # anon=0 00:03:36.517 16:14:45 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:36.517 16:14:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 
00:03:36.517 16:14:45 -- setup/common.sh@18 -- # local node= 00:03:36.517 16:14:45 -- setup/common.sh@19 -- # local var val 00:03:36.517 16:14:45 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.517 16:14:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.517 16:14:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.517 16:14:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.517 16:14:45 -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.517 16:14:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.517 16:14:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65456540 kB' 'MemFree: 49800488 kB' 'MemAvailable: 50217844 kB' 'Buffers: 1064 kB' 'Cached: 11189352 kB' 'SwapCached: 0 kB' 'Active: 11434888 kB' 'Inactive: 264060 kB' 'Active(anon): 10861268 kB' 'Inactive(anon): 0 kB' 'Active(file): 573620 kB' 'Inactive(file): 264060 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 511792 kB' 'Mapped: 147512 kB' 'Shmem: 10352736 kB' 'KReclaimable: 184636 kB' 'Slab: 536832 kB' 'SReclaimable: 184636 kB' 'SUnreclaim: 352196 kB' 'KernelStack: 16096 kB' 'PageTables: 7480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40068296 kB' 'Committed_AS: 12318432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199796 kB' 'VmallocChunk: 0 kB' 'Percpu: 50560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 703912 kB' 'DirectMap2M: 18894848 kB' 'DirectMap1G: 49283072 kB' 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.517 16:14:45 -- setup/common.sh@32 
-- # continue 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.517 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.517 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # 
[[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.518 16:14:45 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.518 
16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.518 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.518 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.519 16:14:45 -- setup/common.sh@33 -- # echo 0 00:03:36.519 16:14:45 -- setup/common.sh@33 -- # return 0 00:03:36.519 16:14:45 -- setup/hugepages.sh@99 -- # surp=0 00:03:36.519 16:14:45 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:36.519 16:14:45 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:36.519 16:14:45 -- setup/common.sh@18 -- # local node= 00:03:36.519 16:14:45 -- setup/common.sh@19 -- # local var val 00:03:36.519 16:14:45 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.519 16:14:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.519 16:14:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.519 16:14:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.519 16:14:45 -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.519 16:14:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.519 16:14:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65456540 kB' 'MemFree: 49800960 kB' 'MemAvailable: 50218316 kB' 'Buffers: 1064 kB' 'Cached: 11189364 kB' 'SwapCached: 0 kB' 'Active: 11434848 kB' 'Inactive: 264060 kB' 'Active(anon): 10861228 kB' 'Inactive(anon): 0 
kB' 'Active(file): 573620 kB' 'Inactive(file): 264060 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 511788 kB' 'Mapped: 147512 kB' 'Shmem: 10352748 kB' 'KReclaimable: 184636 kB' 'Slab: 536832 kB' 'SReclaimable: 184636 kB' 'SUnreclaim: 352196 kB' 'KernelStack: 16096 kB' 'PageTables: 7480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40068296 kB' 'Committed_AS: 12318448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199796 kB' 'VmallocChunk: 0 kB' 'Percpu: 50560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 703912 kB' 'DirectMap2M: 18894848 kB' 'DirectMap1G: 49283072 kB' 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.519 16:14:45 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.519 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.519 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.520 
16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:36.520 16:14:45 -- setup/common.sh@33 -- # echo 0 00:03:36.520 16:14:45 -- setup/common.sh@33 -- # return 0 00:03:36.520 16:14:45 -- setup/hugepages.sh@100 -- # resv=0 00:03:36.520 16:14:45 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:36.520 nr_hugepages=1024 00:03:36.520 16:14:45 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:36.520 resv_hugepages=0 00:03:36.520 16:14:45 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:36.520 surplus_hugepages=0 00:03:36.520 16:14:45 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:36.520 anon_hugepages=0 00:03:36.520 16:14:45 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:36.520 16:14:45 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:36.520 16:14:45 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:36.520 16:14:45 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:36.520 16:14:45 -- setup/common.sh@18 -- # local node= 00:03:36.520 16:14:45 -- setup/common.sh@19 -- # local var val 00:03:36.520 16:14:45 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.520 16:14:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.520 16:14:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:36.520 16:14:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:36.520 16:14:45 -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.520 16:14:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.520 16:14:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65456540 kB' 'MemFree: 49800960 kB' 'MemAvailable: 50218316 kB' 'Buffers: 1064 kB' 'Cached: 11189376 kB' 'SwapCached: 0 kB' 'Active: 11434912 kB' 'Inactive: 264060 kB' 'Active(anon): 10861292 kB' 'Inactive(anon): 0 kB' 'Active(file): 573620 kB' 'Inactive(file): 264060 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 511788 kB' 'Mapped: 147512 kB' 'Shmem: 10352760 kB' 'KReclaimable: 184636 kB' 'Slab: 536832 kB' 'SReclaimable: 184636 kB' 'SUnreclaim: 352196 kB' 'KernelStack: 16096 kB' 'PageTables: 7480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40068296 kB' 'Committed_AS: 12318464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199796 kB' 'VmallocChunk: 0 kB' 'Percpu: 50560 
kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 703912 kB' 'DirectMap2M: 18894848 kB' 'DirectMap1G: 49283072 kB' 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.520 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.520 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.521 16:14:45 -- setup/common.sh@32 
-- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 
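The long runs of "[[ <key> == \H\u\g\e\P\a\g\e\s\_... ]]" / "continue" entries above and below are the key-matching loop inside setup/common.sh's get_meminfo helper: it snapshots a meminfo file, strips any leading "Node N" prefix, then walks each "key: value" line under IFS=': ' until the requested key matches and its value is echoed (1024 for the HugePages_Total pass here, 0 for the HugePages_Surp and HugePages_Rsvd passes earlier). A condensed sketch of that logic, reconstructed from this trace rather than copied from the script, could look like the following; the names get, node, mem_f and mem mirror the trace, but the real helper may differ in detail.

#!/usr/bin/env bash
# Hypothetical condensation of the get_meminfo loop visible in this trace;
# not the actual setup/common.sh implementation.
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # With a node argument, read the per-node meminfo file instead.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo

    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")          # per-node files prefix each line with "Node N "

    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue      # the repeated [[ ... == \H\u\g\e... ]] checks in the trace
        echo "$val"                           # e.g. 1024 for HugePages_Total, 0 for HugePages_Surp
        return 0
    done
    return 1
}

get_meminfo HugePages_Total      # system-wide count
get_meminfo HugePages_Surp 0     # NUMA node 0 only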
00:03:36.521 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.521 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.521 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:36.522 16:14:45 -- setup/common.sh@33 -- # echo 1024 00:03:36.522 16:14:45 -- setup/common.sh@33 
-- # return 0 00:03:36.522 16:14:45 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:36.522 16:14:45 -- setup/hugepages.sh@112 -- # get_nodes 00:03:36.522 16:14:45 -- setup/hugepages.sh@27 -- # local node 00:03:36.522 16:14:45 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:36.522 16:14:45 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:36.522 16:14:45 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:36.522 16:14:45 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:36.522 16:14:45 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:36.522 16:14:45 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:36.522 16:14:45 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:36.522 16:14:45 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:36.522 16:14:45 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:36.522 16:14:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.522 16:14:45 -- setup/common.sh@18 -- # local node=0 00:03:36.522 16:14:45 -- setup/common.sh@19 -- # local var val 00:03:36.522 16:14:45 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.522 16:14:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.522 16:14:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:36.522 16:14:45 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:36.522 16:14:45 -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.522 16:14:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.522 16:14:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634076 kB' 'MemFree: 23059988 kB' 'MemUsed: 9574088 kB' 'SwapCached: 0 kB' 'Active: 7265396 kB' 'Inactive: 65060 kB' 'Active(anon): 6887172 kB' 'Inactive(anon): 0 kB' 'Active(file): 378224 kB' 'Inactive(file): 65060 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7183948 kB' 'Mapped: 71536 kB' 'AnonPages: 149676 kB' 'Shmem: 6740664 kB' 'KernelStack: 8472 kB' 'PageTables: 2724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87340 kB' 'Slab: 290428 kB' 'SReclaimable: 87340 kB' 'SUnreclaim: 203088 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.522 16:14:45 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.522 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.522 16:14:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.523 16:14:45 -- setup/common.sh@32 
-- # continue 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.523 16:14:45 -- setup/common.sh@33 -- # echo 0 00:03:36.523 16:14:45 -- setup/common.sh@33 -- # return 0 00:03:36.523 16:14:45 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:36.523 16:14:45 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:36.523 16:14:45 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:36.523 16:14:45 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:36.523 16:14:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:36.523 16:14:45 -- setup/common.sh@18 -- # local node=1 00:03:36.523 16:14:45 -- setup/common.sh@19 -- # local var val 00:03:36.523 16:14:45 -- setup/common.sh@20 -- # local mem_f mem 00:03:36.523 16:14:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:36.523 16:14:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:36.523 16:14:45 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:36.523 16:14:45 -- setup/common.sh@28 -- # mapfile -t mem 00:03:36.523 16:14:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.523 16:14:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32822464 
kB' 'MemFree: 26740972 kB' 'MemUsed: 6081492 kB' 'SwapCached: 0 kB' 'Active: 4169564 kB' 'Inactive: 199000 kB' 'Active(anon): 3974168 kB' 'Inactive(anon): 0 kB' 'Active(file): 195396 kB' 'Inactive(file): 199000 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4006520 kB' 'Mapped: 75976 kB' 'AnonPages: 362128 kB' 'Shmem: 3612124 kB' 'KernelStack: 7624 kB' 'PageTables: 4756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97296 kB' 'Slab: 246404 kB' 'SReclaimable: 97296 kB' 'SUnreclaim: 149108 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.523 16:14:45 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # continue 
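The same walk is repeated once per NUMA node for HugePages_Surp: mem_f switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo and then node1/meminfo, which is why each line first has its "Node N" prefix stripped. Both nodes report HugePages_Surp: 0 with 512 pages total and 512 free, matching the "node0=512 expecting 512" / "node1=512 expecting 512" lines printed further down. Outside the harness, the same per-node numbers can be spot-checked directly from standard sysfs attributes; a minimal check, assuming the 2048 kB page size shown by Hugepagesize in the meminfo dumps above:

# Per-node hugepage counts straight from sysfs (independent of the SPDK helpers).
for n in /sys/devices/system/node/node[0-9]*; do
    hp=$n/hugepages/hugepages-2048kB
    echo "${n##*/}: total=$(cat "$hp"/nr_hugepages) free=$(cat "$hp"/free_hugepages) surplus=$(cat "$hp"/surplus_hugepages)"
done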
00:03:36.523 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.523 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.523 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.524 16:14:45 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.524 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.524 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.524 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.524 16:14:45 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.524 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.524 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.524 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.524 16:14:45 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.524 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.524 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.524 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.524 16:14:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.524 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.524 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.524 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.524 16:14:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.524 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.524 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.524 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.524 16:14:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.524 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.524 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.524 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.524 16:14:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.524 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.524 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.524 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.524 16:14:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.524 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.524 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.524 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.524 16:14:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.524 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.524 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.524 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.524 16:14:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.524 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.524 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.524 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.524 16:14:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.524 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.524 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.524 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.524 16:14:45 -- 
setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.524 16:14:45 -- setup/common.sh@32 -- # continue 00:03:36.524 16:14:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:36.524 16:14:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:36.524 16:14:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:36.524 16:14:45 -- setup/common.sh@33 -- # echo 0 00:03:36.524 16:14:45 -- setup/common.sh@33 -- # return 0 00:03:36.524 16:14:45 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:36.524 16:14:45 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:36.524 16:14:45 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:36.524 16:14:45 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:36.524 16:14:45 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:36.524 node0=512 expecting 512 00:03:36.524 16:14:45 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:36.524 16:14:45 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:36.524 16:14:45 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:36.524 16:14:45 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:36.524 node1=512 expecting 512 00:03:36.524 16:14:45 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:36.524 00:03:36.524 real 0m3.543s 00:03:36.524 user 0m1.338s 00:03:36.524 sys 0m2.225s 00:03:36.524 16:14:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:36.524 16:14:45 -- common/autotest_common.sh@10 -- # set +x 00:03:36.524 ************************************ 00:03:36.524 END TEST per_node_1G_alloc 00:03:36.524 ************************************ 00:03:36.524 16:14:45 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:36.524 16:14:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:36.524 16:14:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:36.524 16:14:45 -- common/autotest_common.sh@10 -- # set +x 00:03:36.784 ************************************ 00:03:36.784 START TEST even_2G_alloc 00:03:36.784 ************************************ 00:03:36.784 16:14:45 -- common/autotest_common.sh@1111 -- # even_2G_alloc 00:03:36.784 16:14:45 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:36.784 16:14:45 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:36.784 16:14:45 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:36.784 16:14:45 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:36.784 16:14:45 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:36.784 16:14:45 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:36.784 16:14:45 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:36.784 16:14:45 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:36.784 16:14:45 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:36.784 16:14:45 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:36.784 16:14:45 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:36.784 16:14:45 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:36.784 16:14:45 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:36.784 16:14:45 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:36.784 16:14:45 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:36.784 16:14:45 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:36.784 16:14:45 -- setup/hugepages.sh@83 -- # : 512 00:03:36.784 16:14:45 -- setup/hugepages.sh@84 -- # : 1 00:03:36.784 
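per_node_1G_alloc passes here with 512 pages on each of the two nodes, and the suite moves on to even_2G_alloc, whose setup is traced above and below: get_test_nr_hugepages is called with 2097152 kB (2 GB), which at the 2048 kB Hugepagesize works out to nr_hugepages=1024; get_test_nr_hugepages_per_node then splits those pages evenly, 512 per node, before setup.sh is re-run further down with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes. A hypothetical sketch of that arithmetic, not the real get_test_nr_hugepages:

size_kb=2097152           # even_2G_alloc requests 2 GB
hugepagesize_kb=2048      # Hugepagesize from the meminfo dumps above
no_nodes=2                # two nodes found under /sys/devices/system/node
nr_hugepages=$(( size_kb / hugepagesize_kb ))   # 1024
per_node=$(( nr_hugepages / no_nodes ))         # 512 for node0 and node1
echo "nr_hugepages=$nr_hugepages per_node=$per_node"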
16:14:45 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:36.784 16:14:45 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:36.784 16:14:45 -- setup/hugepages.sh@83 -- # : 0 00:03:36.784 16:14:45 -- setup/hugepages.sh@84 -- # : 0 00:03:36.784 16:14:45 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:36.784 16:14:45 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:36.784 16:14:45 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:36.784 16:14:45 -- setup/hugepages.sh@153 -- # setup output 00:03:36.784 16:14:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:36.784 16:14:45 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:40.083 0000:5e:00.0 (144d a80a): Already using the vfio-pci driver 00:03:40.083 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:40.083 0000:af:00.0 (8086 2701): Already using the vfio-pci driver 00:03:40.083 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:40.083 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:40.083 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:40.083 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:40.083 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:40.083 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:40.083 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:40.083 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:40.083 0000:b0:00.0 (8086 2701): Already using the vfio-pci driver 00:03:40.083 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:40.083 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:40.083 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:40.083 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:40.083 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:40.083 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:40.083 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:40.083 16:14:48 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:40.083 16:14:48 -- setup/hugepages.sh@89 -- # local node 00:03:40.083 16:14:48 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:40.083 16:14:48 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:40.083 16:14:48 -- setup/hugepages.sh@92 -- # local surp 00:03:40.083 16:14:48 -- setup/hugepages.sh@93 -- # local resv 00:03:40.083 16:14:48 -- setup/hugepages.sh@94 -- # local anon 00:03:40.083 16:14:48 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:40.083 16:14:48 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:40.083 16:14:48 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:40.083 16:14:48 -- setup/common.sh@18 -- # local node= 00:03:40.083 16:14:48 -- setup/common.sh@19 -- # local var val 00:03:40.083 16:14:48 -- setup/common.sh@20 -- # local mem_f mem 00:03:40.083 16:14:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.083 16:14:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.083 16:14:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.083 16:14:48 -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.083 16:14:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.083 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.083 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.083 16:14:48 -- setup/common.sh@16 -- # printf 
'%s\n' 'MemTotal: 65456540 kB' 'MemFree: 49811868 kB' 'MemAvailable: 50229224 kB' 'Buffers: 1064 kB' 'Cached: 11189456 kB' 'SwapCached: 0 kB' 'Active: 11441132 kB' 'Inactive: 264060 kB' 'Active(anon): 10867512 kB' 'Inactive(anon): 0 kB' 'Active(file): 573620 kB' 'Inactive(file): 264060 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517964 kB' 'Mapped: 147564 kB' 'Shmem: 10352840 kB' 'KReclaimable: 184636 kB' 'Slab: 537220 kB' 'SReclaimable: 184636 kB' 'SUnreclaim: 352584 kB' 'KernelStack: 16736 kB' 'PageTables: 10312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40068296 kB' 'Committed_AS: 12319628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 200036 kB' 'VmallocChunk: 0 kB' 'Percpu: 50560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 703912 kB' 'DirectMap2M: 18894848 kB' 'DirectMap1G: 49283072 kB' 00:03:40.083 16:14:48 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.083 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.083 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.083 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.083 16:14:48 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.083 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.083 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.083 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.083 16:14:48 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.083 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.083 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.083 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.083 16:14:48 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.083 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.083 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.083 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.083 16:14:48 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.083 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.083 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.083 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.083 16:14:48 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.083 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.083 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.083 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.083 16:14:48 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.083 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.083 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.083 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.083 16:14:48 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.083 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.083 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.083 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.083 16:14:48 -- setup/common.sh@32 -- # [[ 
Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.083 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.083 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.083 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.083 16:14:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.083 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.083 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.083 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.083 16:14:48 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.083 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.083 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.083 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.084 16:14:48 -- setup/common.sh@31 
-- # IFS=': ' 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.084 16:14:48 -- setup/common.sh@33 -- # echo 0 00:03:40.084 16:14:48 -- setup/common.sh@33 -- # return 0 00:03:40.084 16:14:48 -- setup/hugepages.sh@97 -- # anon=0 00:03:40.084 16:14:48 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:40.084 16:14:48 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:40.084 16:14:48 -- setup/common.sh@18 -- # local node= 00:03:40.084 16:14:48 -- setup/common.sh@19 -- # local var val 00:03:40.084 16:14:48 -- setup/common.sh@20 -- # local mem_f mem 00:03:40.084 16:14:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.084 16:14:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.084 16:14:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.084 16:14:48 -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.084 16:14:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.084 16:14:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65456540 kB' 'MemFree: 49812672 kB' 'MemAvailable: 50230024 kB' 'Buffers: 1064 kB' 'Cached: 11189460 kB' 'SwapCached: 0 kB' 'Active: 11438948 kB' 'Inactive: 264060 kB' 'Active(anon): 10865328 kB' 'Inactive(anon): 0 kB' 'Active(file): 573620 kB' 'Inactive(file): 264060 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515344 kB' 'Mapped: 147616 kB' 'Shmem: 10352844 kB' 'KReclaimable: 184628 kB' 'Slab: 537056 kB' 'SReclaimable: 184628 kB' 'SUnreclaim: 352428 kB' 'KernelStack: 16336 kB' 
'PageTables: 7528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40068296 kB' 'Committed_AS: 12318616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199972 kB' 'VmallocChunk: 0 kB' 'Percpu: 50560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 703912 kB' 'DirectMap2M: 18894848 kB' 'DirectMap1G: 49283072 kB' 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.084 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.084 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
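Editor's sketch: each run of "[[ SomeKey == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue" entries above is one pass of the same lookup pattern: snapshot the meminfo file, strip the "Node N " prefix when a per-node file is used, then walk the lines with IFS=': ' until the requested key matches and echo its value. A compact, self-contained sketch of that lookup follows; the function name is illustrative and the per-node path handling is an assumption based on the node= handling visible in the trace, not a copy of setup/common.sh.

  #!/usr/bin/env bash
  # Sketch of a meminfo field lookup in the style traced above.
  get_meminfo_field() {
      local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
      # Per-node queries use the node's own meminfo, whose lines start "Node N ".
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      while IFS='' read -r line; do
          [[ -n $node ]] && line=${line#"Node $node "}   # drop the node prefix
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue               # not the requested key
          echo "$val"
          return 0
      done < "$mem_f"
      return 1
  }

  get_meminfo_field HugePages_Surp      # global value
  get_meminfo_field HugePages_Free 0    # per-node variant, e.g. node 0

Against the /proc/meminfo snapshot printed above, the first call would return 0, which is exactly the surp=0 the verifier records a few entries further on.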
00:03:40.085 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.085 16:14:48 
-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.085 16:14:48 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.085 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.085 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.086 16:14:48 -- setup/common.sh@33 -- # echo 0 00:03:40.086 16:14:48 -- setup/common.sh@33 -- # return 0 00:03:40.086 16:14:48 -- setup/hugepages.sh@99 -- # surp=0 00:03:40.086 16:14:48 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:40.086 16:14:48 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:40.086 16:14:48 -- setup/common.sh@18 -- # local node= 00:03:40.086 16:14:48 -- setup/common.sh@19 -- # local var val 00:03:40.086 16:14:48 -- setup/common.sh@20 -- # local mem_f mem 00:03:40.086 16:14:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.086 16:14:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.086 16:14:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.086 16:14:48 -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.086 16:14:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.086 16:14:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65456540 kB' 'MemFree: 49814356 kB' 'MemAvailable: 50231708 kB' 'Buffers: 1064 kB' 'Cached: 11189472 kB' 'SwapCached: 0 kB' 'Active: 11438092 kB' 'Inactive: 264060 kB' 'Active(anon): 10864472 kB' 'Inactive(anon): 0 kB' 'Active(file): 573620 kB' 'Inactive(file): 264060 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514972 kB' 'Mapped: 147540 kB' 'Shmem: 10352856 kB' 'KReclaimable: 184628 kB' 'Slab: 537096 kB' 'SReclaimable: 184628 kB' 'SUnreclaim: 352468 kB' 'KernelStack: 16112 kB' 'PageTables: 7472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40068296 kB' 'Committed_AS: 12318632 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199940 kB' 'VmallocChunk: 0 kB' 'Percpu: 50560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 703912 kB' 'DirectMap2M: 18894848 kB' 'DirectMap1G: 49283072 kB' 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.086 16:14:48 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # IFS=': 
' 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.086 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.086 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.087 16:14:48 -- setup/common.sh@32 
-- # continue 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.087 16:14:48 -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.087 16:14:48 -- setup/common.sh@33 -- # echo 0 00:03:40.087 16:14:48 -- setup/common.sh@33 -- # return 0 00:03:40.087 16:14:48 -- setup/hugepages.sh@100 -- # resv=0 00:03:40.087 16:14:48 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:40.087 nr_hugepages=1024 00:03:40.087 16:14:48 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 
00:03:40.087 resv_hugepages=0 00:03:40.087 16:14:48 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:40.087 surplus_hugepages=0 00:03:40.087 16:14:48 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:40.087 anon_hugepages=0 00:03:40.087 16:14:48 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:40.087 16:14:48 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:40.087 16:14:48 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:40.087 16:14:48 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:40.087 16:14:48 -- setup/common.sh@18 -- # local node= 00:03:40.087 16:14:48 -- setup/common.sh@19 -- # local var val 00:03:40.087 16:14:48 -- setup/common.sh@20 -- # local mem_f mem 00:03:40.087 16:14:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.087 16:14:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.087 16:14:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.087 16:14:48 -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.087 16:14:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.087 16:14:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65456540 kB' 'MemFree: 49814828 kB' 'MemAvailable: 50232180 kB' 'Buffers: 1064 kB' 'Cached: 11189488 kB' 'SwapCached: 0 kB' 'Active: 11438100 kB' 'Inactive: 264060 kB' 'Active(anon): 10864480 kB' 'Inactive(anon): 0 kB' 'Active(file): 573620 kB' 'Inactive(file): 264060 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514968 kB' 'Mapped: 147540 kB' 'Shmem: 10352872 kB' 'KReclaimable: 184628 kB' 'Slab: 537096 kB' 'SReclaimable: 184628 kB' 'SUnreclaim: 352468 kB' 'KernelStack: 16112 kB' 'PageTables: 7472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40068296 kB' 'Committed_AS: 12318648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199940 kB' 'VmallocChunk: 0 kB' 'Percpu: 50560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 703912 kB' 'DirectMap2M: 18894848 kB' 'DirectMap1G: 49283072 kB' 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.087 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.087 16:14:48 -- setup/common.sh@32 -- # continue 00:03:40.088 16:14:48 -- setup/common.sh@31 -- # IFS=': ' 00:03:40.088 16:14:48 -- setup/common.sh@31 -- # read -r var val _ 00:03:40.088 16:14:48 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.088 
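Editor's sketch: by this point the verifier has recorded anon=0, surp=0 and resv=0 and is confirming that the global pool really is the 1024 pages requested (nr_hugepages=1024 above), before get_nodes collects the per-node counts of 512 apiece from sysfs. The following is a small sketch of that consistency check, not the suite's own code; the expected count of 1024, the awk-based field reads and the 2048 kB sysfs path are assumptions matching this particular run.

  #!/usr/bin/env bash
  # Sketch: check the global hugepage pool and the per-node split add up.
  expected=1024                                   # NRHUGE for this run
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
  resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)

  (( total == expected )) || echo "unexpected HugePages_Total: $total"
  (( surp == 0 && resv == 0 )) || echo "surplus/reserved pages present: surp=$surp resv=$resv"

  # The per-node split (512 + 512 here) should add back up to the global total.
  node_sum=0
  for f in /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages; do
      (( node_sum += $(<"$f") ))
  done
  (( node_sum == total )) || echo "per-node sum $node_sum != HugePages_Total $total"

For the snapshot above (HugePages_Total: 1024, two nodes at 512 apiece) every check would pass silently, which is what lets the trace proceed to the per-node HugePages_Surp reads that follow.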
16:14:48 -- setup/common.sh@32 -- # continue
00:03:40.089 16:14:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:40.089 16:14:48 -- setup/common.sh@33 -- # echo 1024
00:03:40.089 16:14:48 -- setup/common.sh@33 -- # return 0
00:03:40.089 16:14:48 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:40.089 16:14:48 -- setup/hugepages.sh@112 -- # get_nodes
00:03:40.089 16:14:48 -- setup/hugepages.sh@27 -- # local node
00:03:40.089 16:14:48 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:40.089 16:14:48 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:40.089 16:14:48 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:40.089 16:14:48 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:40.089 16:14:48 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:40.089 16:14:48 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
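The trace above is common.sh's get_meminfo scanning a meminfo file key by key until the requested field (HugePages_Total here) matches, then echoing its value. Below is a minimal sketch of that lookup pattern, assuming a /proc-style "Key: value" layout; the function name lookup_meminfo and the digit-only value stripping are illustrative, not the code from setup/common.sh.

#!/usr/bin/env bash
# Sketch: find one "Key: value" pair in a meminfo-style file and print the
# value, the way the trace above does it. lookup_meminfo is a hypothetical
# name, not the SPDK helper.
shopt -s extglob

lookup_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo line
    # Per-NUMA-node statistics live in /sys/devices/system/node/nodeN/meminfo.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        line=${line#Node +([0-9]) }      # per-node files prefix each line with "Node N "
        if [[ ${line%%:*} == "$get" ]]; then
            line=${line#*:}
            echo "${line//[!0-9]/}"      # print just the number, dropping " kB" if present
            return 0
        fi
    done <"$mem_f"
    return 1
}

# e.g. lookup_meminfo HugePages_Surp 0  ->  0 on the node traced above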
00:03:40.089 16:14:48 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:40.089 16:14:48 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:40.089 16:14:48 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:40.089 16:14:48 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:40.089 16:14:48 -- setup/common.sh@18 -- # local node=0
00:03:40.089 16:14:48 -- setup/common.sh@19 -- # local var val
00:03:40.089 16:14:48 -- setup/common.sh@20 -- # local mem_f mem
00:03:40.089 16:14:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:40.089 16:14:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:40.089 16:14:48 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:40.089 16:14:48 -- setup/common.sh@28 -- # mapfile -t mem
00:03:40.089 16:14:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:40.089 16:14:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634076 kB' 'MemFree: 23067028 kB' 'MemUsed: 9567048 kB' 'SwapCached: 0 kB' 'Active: 7268176 kB' 'Inactive: 65060 kB' 'Active(anon): 6889952 kB' 'Inactive(anon): 0 kB' 'Active(file): 378224 kB' 'Inactive(file): 65060 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7183976 kB' 'Mapped: 71564 kB' 'AnonPages: 152568 kB' 'Shmem: 6740692 kB' 'KernelStack: 8536 kB' 'PageTables: 2748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87332 kB' 'Slab: 290728 kB' 'SReclaimable: 87332 kB' 'SUnreclaim: 203396 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:40.090 16:14:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:40.090 16:14:49 -- setup/common.sh@33 -- # echo 0
00:03:40.090 16:14:49 -- setup/common.sh@33 -- # return 0
00:03:40.090 16:14:49 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:40.090 16:14:49 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:40.090 16:14:49 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:40.090 16:14:49 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:40.090 16:14:49 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:40.090 16:14:49 -- setup/common.sh@18 -- # local node=1
00:03:40.090 16:14:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:40.090 16:14:49 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:40.090 16:14:49 -- setup/common.sh@28 -- # mapfile -t mem
00:03:40.090 16:14:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32822464 kB' 'MemFree: 26744568 kB' 'MemUsed: 6077896 kB' 'SwapCached: 0 kB' 'Active: 4172912 kB' 'Inactive: 199000 kB' 'Active(anon): 3977516 kB' 'Inactive(anon): 0 kB' 'Active(file): 195396 kB' 'Inactive(file): 199000 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4006604 kB' 'Mapped: 76480 kB' 'AnonPages: 365376 kB' 'Shmem: 3612208 kB' 'KernelStack: 7544 kB' 'PageTables: 4632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97296 kB' 'Slab: 246368 kB' 'SReclaimable: 97296 kB' 'SUnreclaim: 149072 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:40.091 16:14:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:40.091 16:14:49 -- setup/common.sh@33 -- # echo 0
00:03:40.091 16:14:49 -- setup/common.sh@33 -- # return 0
00:03:40.091 16:14:49 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:40.091 16:14:49 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:40.091 16:14:49 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:40.091 16:14:49 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:40.091 16:14:49 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:40.091 node0=512 expecting 512
00:03:40.091 16:14:49 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:40.091 16:14:49 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:40.091 16:14:49 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:40.091 16:14:49 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:40.091 node1=512 expecting 512
00:03:40.091 16:14:49 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
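The even_2G_alloc check above ends with both NUMA nodes reporting 512 pages against an expected 512, i.e. the 1024-page pool is split evenly and there are no surplus pages. Here is a sketch of the same accounting done directly against sysfs; the paths assume the default 2048 kB hugepage size and a system that exposes per-node hugepage counters, and the messages are illustrative.

# Sum the per-node 2 MiB hugepage counts and compare them to the global pool.
total=0
for node in /sys/devices/system/node/node[0-9]*; do
    n=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
    echo "${node##*/}=$n"                     # expect 512 on each node of this rig
    (( total += n ))
done
# The kernel-wide count should equal the per-node sum (1024 in the trace above).
(( total == $(cat /proc/sys/vm/nr_hugepages) )) && echo "per-node sum matches global pool"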
00:03:40.091 real 0m3.372s
00:03:40.091 user 0m1.209s
00:03:40.091 sys 0m2.151s
00:03:40.091 16:14:49 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:40.091 16:14:49 -- common/autotest_common.sh@10 -- # set +x
00:03:40.091 ************************************
00:03:40.091 END TEST even_2G_alloc
00:03:40.091 ************************************
00:03:40.091 16:14:49 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:40.091 16:14:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:40.091 16:14:49 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:40.091 16:14:49 -- common/autotest_common.sh@10 -- # set +x
00:03:40.350 ************************************
00:03:40.350 START TEST odd_alloc
00:03:40.350 ************************************
00:03:40.350 16:14:49 -- common/autotest_common.sh@1111 -- # odd_alloc
00:03:40.350 16:14:49 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:40.350 16:14:49 -- setup/hugepages.sh@49 -- # local size=2098176
00:03:40.350 16:14:49 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:40.350 16:14:49 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:40.350 16:14:49 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:40.350 16:14:49 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:40.350 16:14:49 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:40.350 16:14:49 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:40.350 16:14:49 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:40.350 16:14:49 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:40.350 16:14:49 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:40.350 16:14:49 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:40.350 16:14:49 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:40.350 16:14:49 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:40.350 16:14:49 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:40.350 16:14:49 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:40.350 16:14:49 -- setup/hugepages.sh@83 -- # : 513
00:03:40.350 16:14:49 -- setup/hugepages.sh@84 -- # : 1
00:03:40.350 16:14:49 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:40.350 16:14:49 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:40.350 16:14:49 -- setup/hugepages.sh@83 -- # : 0
00:03:40.350 16:14:49 -- setup/hugepages.sh@84 -- # : 0
00:03:40.350 16:14:49 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:40.350 16:14:49 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:40.350 16:14:49 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:40.350 16:14:49 -- setup/hugepages.sh@160 -- # setup output
00:03:40.350 16:14:49 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:40.350 16:14:49 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:03:43.650 0000:5e:00.0 (144d a80a): Already using the vfio-pci driver
00:03:43.650 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:43.650 0000:af:00.0 (8086 2701): Already using the vfio-pci driver
00:03:43.650 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:43.650 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:43.650 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:43.650 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:43.650 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:43.650 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:43.650 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:43.650 0000:b0:00.0 (8086 2701): Already using the vfio-pci driver
00:03:43.650 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:43.650 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:43.650 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:43.650 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:43.650 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:43.650 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:43.650 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:43.650 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:43.650 16:14:52 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:43.650 16:14:52 -- setup/hugepages.sh@89 -- # local node
00:03:43.650 16:14:52 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:43.650 16:14:52 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:43.650 16:14:52 -- setup/hugepages.sh@92 -- # local surp
00:03:43.650 16:14:52 -- setup/hugepages.sh@93 -- # local resv
00:03:43.650 16:14:52 -- setup/hugepages.sh@94 -- # local anon
00:03:43.650 16:14:52 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
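The [[ always [madvise] never != *\[never\]* ]] test above appears to compare the contents of /sys/kernel/mm/transparent_hugepage/enabled against the bracketed "[never]" mode: the kernel marks the active THP setting with brackets, so the guard only fails when THP is fully disabled. A standalone sketch of that check; the echoed messages are illustrative, not taken from setup/hugepages.sh.

# Read the active transparent-hugepage mode, e.g. "always [madvise] never".
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
if [[ $thp == *"[never]"* ]]; then
    echo "THP disabled; AnonHugePages is expected to stay at 0"
else
    echo "THP mode: $thp"
fi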
setup/common.sh@32 -- # continue 00:03:43.650 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.650 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.650 16:14:52 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.650 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.650 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.650 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.650 16:14:52 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.650 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.650 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.650 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.650 16:14:52 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.650 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.650 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.650 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.650 16:14:52 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.650 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.650 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.650 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.650 16:14:52 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.650 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.650 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.650 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.650 16:14:52 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.650 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.650 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.650 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.650 16:14:52 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.650 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.650 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.650 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.650 16:14:52 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.650 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.650 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.650 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.650 16:14:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.650 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.650 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.650 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.650 16:14:52 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.650 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.650 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.650 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.650 16:14:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.650 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.650 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.650 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.650 16:14:52 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.650 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.650 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.650 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.650 16:14:52 -- setup/common.sh@32 -- # 
[[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.650 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.650 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.650 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.650 16:14:52 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.650 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.650 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.650 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.650 16:14:52 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.650 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.650 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.650 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.650 16:14:52 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.650 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.650 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.650 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.650 16:14:52 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.650 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.650 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.650 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.650 16:14:52 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.650 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.650 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.650 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.650 16:14:52 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.650 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.650 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.650 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.650 16:14:52 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.650 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.650 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.650 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.650 16:14:52 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.650 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.650 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 
00:03:43.651 16:14:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.651 16:14:52 -- setup/common.sh@31 
-- # IFS=': ' 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:43.651 16:14:52 -- setup/common.sh@33 -- # echo 0 00:03:43.651 16:14:52 -- setup/common.sh@33 -- # return 0 00:03:43.651 16:14:52 -- setup/hugepages.sh@97 -- # anon=0 00:03:43.651 16:14:52 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:43.651 16:14:52 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:43.651 16:14:52 -- setup/common.sh@18 -- # local node= 00:03:43.651 16:14:52 -- setup/common.sh@19 -- # local var val 00:03:43.651 16:14:52 -- setup/common.sh@20 -- # local mem_f mem 00:03:43.651 16:14:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.651 16:14:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.651 16:14:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.651 16:14:52 -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.651 16:14:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.651 16:14:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65456540 kB' 'MemFree: 49804380 kB' 'MemAvailable: 50221728 kB' 'Buffers: 1064 kB' 'Cached: 11189588 kB' 'SwapCached: 0 kB' 'Active: 11443272 kB' 'Inactive: 264060 kB' 'Active(anon): 10869652 kB' 'Inactive(anon): 0 kB' 'Active(file): 573620 kB' 'Inactive(file): 264060 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520092 kB' 'Mapped: 147572 kB' 'Shmem: 10352972 kB' 'KReclaimable: 184620 kB' 'Slab: 536796 kB' 'SReclaimable: 184620 kB' 'SUnreclaim: 352176 kB' 'KernelStack: 16160 kB' 'PageTables: 7472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40067272 kB' 'Committed_AS: 12321364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199876 kB' 'VmallocChunk: 0 kB' 'Percpu: 50560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 703912 kB' 'DirectMap2M: 18894848 kB' 'DirectMap1G: 49283072 kB' 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.651 16:14:52 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.651 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.651 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # continue 
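The backslash-heavy tokens above (\H\u\g\e\P\a\g\e\s\_\S\u\r\p and similar) are not corruption: under set -x, bash escapes each character of the right-hand operand of == inside [[ ]], because that operand is treated as a pattern and is being matched literally here. A minimal reproduction, assuming nothing beyond an ordinary bash shell:

  set -x
  key=HugePages_Surp
  # The trace line bash prints for this test uses the same escaped form
  # seen throughout the log above.
  [[ $key == HugePages_Surp ]] && echo match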
00:03:43.652 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.652 16:14:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.652 16:14:52 -- setup/common.sh@33 -- # echo 0 00:03:43.652 16:14:52 -- setup/common.sh@33 -- # return 0 00:03:43.652 16:14:52 -- setup/hugepages.sh@99 -- # surp=0 00:03:43.652 16:14:52 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:43.652 16:14:52 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:43.652 16:14:52 -- setup/common.sh@18 -- # local node= 00:03:43.652 16:14:52 -- setup/common.sh@19 -- # local var val 00:03:43.652 16:14:52 -- setup/common.sh@20 
-- # local mem_f mem 00:03:43.652 16:14:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.652 16:14:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.652 16:14:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.652 16:14:52 -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.652 16:14:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.652 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.653 16:14:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65456540 kB' 'MemFree: 49809556 kB' 'MemAvailable: 50226904 kB' 'Buffers: 1064 kB' 'Cached: 11189592 kB' 'SwapCached: 0 kB' 'Active: 11442736 kB' 'Inactive: 264060 kB' 'Active(anon): 10869116 kB' 'Inactive(anon): 0 kB' 'Active(file): 573620 kB' 'Inactive(file): 264060 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519448 kB' 'Mapped: 147580 kB' 'Shmem: 10352976 kB' 'KReclaimable: 184620 kB' 'Slab: 536884 kB' 'SReclaimable: 184620 kB' 'SUnreclaim: 352264 kB' 'KernelStack: 16208 kB' 'PageTables: 7420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40067272 kB' 'Committed_AS: 12321376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199972 kB' 'VmallocChunk: 0 kB' 'Percpu: 50560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 703912 kB' 'DirectMap2M: 18894848 kB' 'DirectMap1G: 49283072 kB' 00:03:43.653 16:14:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.653 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.653 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.653 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.653 16:14:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.653 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.653 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.653 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.653 16:14:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.653 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.653 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.653 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.653 16:14:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.653 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.653 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.653 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.653 16:14:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.653 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.653 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.653 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.653 16:14:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.653 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.653 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.653 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.653 16:14:52 -- 
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.653 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.653 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.653 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.653 16:14:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.653 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.653 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.653 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.653 16:14:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.653 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.653 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.653 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.653 16:14:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.653 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.653 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.653 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.653 16:14:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.653 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.653 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.653 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.653 16:14:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.653 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.653 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.653 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.653 16:14:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.653 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.653 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.653 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.653 16:14:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.653 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.653 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.653 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.653 16:14:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.653 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.653 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.653 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.653 16:14:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.653 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.653 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.653 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.653 16:14:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.653 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.653 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.653 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.653 16:14:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.653 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.653 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.653 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.653 16:14:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.653 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.653 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 
00:03:43.653 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.653 16:14:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.653 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.916 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.916 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.916 16:14:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.916 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.916 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.916 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.916 16:14:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.916 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.916 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.916 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.916 16:14:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.916 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.916 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.916 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.916 16:14:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.916 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.916 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.916 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.916 16:14:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.916 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.916 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.916 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.916 16:14:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.916 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.916 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.916 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.916 16:14:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.916 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.916 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.916 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.916 16:14:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.916 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.916 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.916 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.916 16:14:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.916 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.916 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.916 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.916 16:14:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.916 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.916 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.916 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.916 16:14:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.916 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.916 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.916 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.916 16:14:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.916 16:14:52 -- 
setup/common.sh@32 -- # continue 00:03:43.916 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.916 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:43.917 16:14:52 -- setup/common.sh@33 -- # echo 0 00:03:43.917 16:14:52 -- setup/common.sh@33 -- # return 0 00:03:43.917 16:14:52 -- setup/hugepages.sh@100 -- # resv=0 00:03:43.917 16:14:52 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:43.917 nr_hugepages=1025 00:03:43.917 16:14:52 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:43.917 resv_hugepages=0 00:03:43.917 16:14:52 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:43.917 surplus_hugepages=0 00:03:43.917 16:14:52 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:43.917 anon_hugepages=0 00:03:43.917 16:14:52 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:43.917 16:14:52 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:43.917 16:14:52 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:43.917 16:14:52 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:43.917 16:14:52 -- setup/common.sh@18 -- # local node= 00:03:43.917 16:14:52 -- setup/common.sh@19 -- # local var val 00:03:43.917 16:14:52 -- setup/common.sh@20 -- # local mem_f mem 00:03:43.917 16:14:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.917 16:14:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:43.917 16:14:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:43.917 16:14:52 -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.917 16:14:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.917 16:14:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65456540 kB' 'MemFree: 49810468 kB' 'MemAvailable: 50227816 kB' 
'Buffers: 1064 kB' 'Cached: 11189608 kB' 'SwapCached: 0 kB' 'Active: 11443828 kB' 'Inactive: 264060 kB' 'Active(anon): 10870208 kB' 'Inactive(anon): 0 kB' 'Active(file): 573620 kB' 'Inactive(file): 264060 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520528 kB' 'Mapped: 147572 kB' 'Shmem: 10352992 kB' 'KReclaimable: 184620 kB' 'Slab: 536884 kB' 'SReclaimable: 184620 kB' 'SUnreclaim: 352264 kB' 'KernelStack: 16336 kB' 'PageTables: 7728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40067272 kB' 'Committed_AS: 12321392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 200004 kB' 'VmallocChunk: 0 kB' 'Percpu: 50560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 703912 kB' 'DirectMap2M: 18894848 kB' 'DirectMap1G: 49283072 kB' 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
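Each get_meminfo call traced here walks the "key: value" pairs of /proc/meminfo (or of a per-node meminfo file) and returns the value of the requested key, falling back to 0; the dump above shows HugePages_Total coming back as 1025 on this host. A minimal sketch of that lookup pattern, using a hypothetical helper name rather than the actual setup/common.sh code:

  # lookup_meminfo KEY [NODE]: print the value of KEY from /proc/meminfo,
  # or from the node's sysfs meminfo file when a node index is given.
  # Hypothetical illustration only, not the SPDK test helper itself.
  lookup_meminfo() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      local var val _
      # Per-node files prefix every line with "Node N "; strip it so the
      # same "key: value" parsing works for both sources.
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < <(sed 's/^Node [0-9]* //' "$mem_f")
      echo 0
  }
  lookup_meminfo HugePages_Total   # prints 1025 on the box traced above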
00:03:43.917 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.917 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.917 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.918 16:14:52 -- setup/common.sh@32 -- 
# continue 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.918 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.918 16:14:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:43.918 16:14:52 -- setup/common.sh@33 -- # echo 1025 00:03:43.919 16:14:52 -- setup/common.sh@33 -- # return 0 00:03:43.919 16:14:52 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:43.919 16:14:52 -- setup/hugepages.sh@112 -- # get_nodes 00:03:43.919 16:14:52 -- setup/hugepages.sh@27 -- # local node 00:03:43.919 16:14:52 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:43.919 16:14:52 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:43.919 16:14:52 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:43.919 16:14:52 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:43.919 16:14:52 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:43.919 16:14:52 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:43.919 16:14:52 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:43.919 16:14:52 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:43.919 16:14:52 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:43.919 16:14:52 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:43.919 16:14:52 -- setup/common.sh@18 -- # local node=0 00:03:43.919 16:14:52 -- setup/common.sh@19 -- # local var val 00:03:43.919 16:14:52 -- setup/common.sh@20 -- # local mem_f mem 00:03:43.919 16:14:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.919 16:14:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:43.919 16:14:52 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:43.919 16:14:52 -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.919 16:14:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.919 16:14:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634076 kB' 'MemFree: 23070620 kB' 'MemUsed: 9563456 kB' 'SwapCached: 0 kB' 'Active: 7272756 kB' 'Inactive: 65060 kB' 'Active(anon): 6894532 kB' 'Inactive(anon): 0 kB' 'Active(file): 378224 kB' 'Inactive(file): 65060 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7184048 kB' 'Mapped: 71592 kB' 'AnonPages: 157008 kB' 'Shmem: 6740764 kB' 'KernelStack: 8552 kB' 'PageTables: 2652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87332 kB' 'Slab: 290468 kB' 'SReclaimable: 87332 kB' 'SUnreclaim: 203136 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.919 16:14:52 -- 
setup/common.sh@32 -- # continue 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.919 16:14:52 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 
00:03:43.919 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.919 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.919 16:14:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.920 16:14:52 -- setup/common.sh@33 -- # echo 0 00:03:43.920 16:14:52 -- setup/common.sh@33 -- # return 0 00:03:43.920 16:14:52 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:43.920 16:14:52 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:43.920 16:14:52 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:43.920 16:14:52 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:43.920 16:14:52 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:43.920 16:14:52 -- 
setup/common.sh@18 -- # local node=1 00:03:43.920 16:14:52 -- setup/common.sh@19 -- # local var val 00:03:43.920 16:14:52 -- setup/common.sh@20 -- # local mem_f mem 00:03:43.920 16:14:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:43.920 16:14:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:43.920 16:14:52 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:43.920 16:14:52 -- setup/common.sh@28 -- # mapfile -t mem 00:03:43.920 16:14:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.920 16:14:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32822464 kB' 'MemFree: 26738536 kB' 'MemUsed: 6083928 kB' 'SwapCached: 0 kB' 'Active: 4170416 kB' 'Inactive: 199000 kB' 'Active(anon): 3975020 kB' 'Inactive(anon): 0 kB' 'Active(file): 195396 kB' 'Inactive(file): 199000 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4006636 kB' 'Mapped: 75980 kB' 'AnonPages: 362856 kB' 'Shmem: 3612240 kB' 'KernelStack: 7608 kB' 'PageTables: 4696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97288 kB' 'Slab: 246392 kB' 'SReclaimable: 97288 kB' 'SUnreclaim: 149104 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.920 16:14:52 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 
00:03:43.920 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.920 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.920 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.921 16:14:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.921 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.921 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.921 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.921 16:14:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.921 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.921 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.921 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.921 16:14:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.921 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.921 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.921 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.921 16:14:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.921 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.921 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.921 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.921 16:14:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:43.921 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.921 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.921 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.921 16:14:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.921 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.921 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.921 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.921 16:14:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.921 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.921 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.921 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.921 16:14:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.921 16:14:52 -- setup/common.sh@32 -- # continue 00:03:43.921 16:14:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:43.921 16:14:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:43.921 16:14:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:43.921 16:14:52 -- setup/common.sh@33 -- # echo 0 00:03:43.921 16:14:52 -- setup/common.sh@33 -- # return 0 00:03:43.921 16:14:52 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:43.921 16:14:52 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:43.921 16:14:52 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:43.921 16:14:52 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:43.921 16:14:52 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:43.921 node0=512 expecting 513 00:03:43.921 16:14:52 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:43.921 16:14:52 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:43.921 16:14:52 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:43.921 16:14:52 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:43.921 node1=513 expecting 512 00:03:43.921 16:14:52 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:43.921 00:03:43.921 real 0m3.574s 00:03:43.921 user 0m1.364s 00:03:43.921 sys 0m2.231s 00:03:43.921 16:14:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:43.921 16:14:52 -- common/autotest_common.sh@10 -- # set +x 00:03:43.921 ************************************ 00:03:43.921 END TEST odd_alloc 00:03:43.921 ************************************ 00:03:43.921 16:14:52 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:43.921 16:14:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:43.921 16:14:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:43.921 16:14:52 -- common/autotest_common.sh@10 -- # set +x 00:03:44.182 ************************************ 00:03:44.182 START TEST custom_alloc 00:03:44.182 ************************************ 00:03:44.182 16:14:52 -- common/autotest_common.sh@1111 -- # custom_alloc 00:03:44.182 16:14:52 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:44.182 16:14:52 -- setup/hugepages.sh@169 -- # local node 00:03:44.182 16:14:52 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:44.182 16:14:52 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:44.182 16:14:52 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:44.182 16:14:52 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:44.182 16:14:52 -- setup/hugepages.sh@49 -- # local size=1048576 
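The node-1 scan that just returned 0 for HugePages_Surp above is the core of setup/common.sh's get_meminfo: pick the per-node meminfo file when a node is given, strip the "Node N " prefix those files put on every line, then read key/value pairs with IFS=': ' until the requested key matches. A minimal stand-alone sketch of that technique; the function and variable names here are illustrative, not the autotest helper itself:

shopt -s extglob                                  # the "Node +([0-9]) " strip below is an extglob pattern
meminfo_value() {                                 # illustrative re-implementation, not setup/common.sh
  local get=$1 node=${2:-} var val _
  local mem_f=/proc/meminfo
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  local -a mem
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")                # per-node files prefix every line with "Node N "
  while IFS=': ' read -r var val _; do            # "HugePages_Surp: 0" -> var=HugePages_Surp, val=0
    [[ $var == "$get" ]] && { echo "$val"; return 0; }
  done < <(printf '%s\n' "${mem[@]}")
  return 1
}
# meminfo_value HugePages_Surp 1   -> 0 on the node traced above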
00:03:44.182 16:14:52 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:44.182 16:14:52 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:44.182 16:14:52 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:44.182 16:14:52 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:44.182 16:14:52 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:44.182 16:14:52 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:44.182 16:14:52 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:44.182 16:14:52 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:44.182 16:14:52 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:44.182 16:14:52 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:44.182 16:14:52 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:44.182 16:14:52 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:44.182 16:14:52 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:44.182 16:14:52 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:44.182 16:14:52 -- setup/hugepages.sh@83 -- # : 256 00:03:44.182 16:14:52 -- setup/hugepages.sh@84 -- # : 1 00:03:44.182 16:14:52 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:44.182 16:14:52 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:44.182 16:14:52 -- setup/hugepages.sh@83 -- # : 0 00:03:44.182 16:14:52 -- setup/hugepages.sh@84 -- # : 0 00:03:44.182 16:14:52 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:44.182 16:14:52 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:44.182 16:14:52 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:44.182 16:14:52 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:44.182 16:14:52 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:44.182 16:14:52 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:44.182 16:14:52 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:44.182 16:14:52 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:44.182 16:14:52 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:44.182 16:14:52 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:44.182 16:14:52 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:44.182 16:14:52 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:44.182 16:14:52 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:44.182 16:14:52 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:44.182 16:14:52 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:44.182 16:14:52 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:44.182 16:14:52 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:44.182 16:14:52 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:44.182 16:14:52 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:44.182 16:14:52 -- setup/hugepages.sh@78 -- # return 0 00:03:44.182 16:14:52 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:44.182 16:14:52 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:44.182 16:14:52 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:44.182 16:14:52 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:44.182 16:14:52 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:44.182 16:14:52 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:44.182 16:14:52 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:44.182 16:14:52 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:44.182 16:14:52 -- 
setup/hugepages.sh@62 -- # user_nodes=() 00:03:44.182 16:14:52 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:44.182 16:14:52 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:44.182 16:14:52 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:44.182 16:14:52 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:44.182 16:14:52 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:44.182 16:14:52 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:44.182 16:14:52 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:44.182 16:14:52 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:44.182 16:14:52 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:44.182 16:14:52 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:44.182 16:14:52 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:44.182 16:14:52 -- setup/hugepages.sh@78 -- # return 0 00:03:44.182 16:14:52 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:44.182 16:14:52 -- setup/hugepages.sh@187 -- # setup output 00:03:44.182 16:14:52 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:44.182 16:14:52 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:47.482 0000:5e:00.0 (144d a80a): Already using the vfio-pci driver 00:03:47.482 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:47.482 0000:af:00.0 (8086 2701): Already using the vfio-pci driver 00:03:47.482 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:47.482 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:47.482 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:47.482 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:47.482 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:47.482 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:47.482 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:47.482 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:47.482 0000:b0:00.0 (8086 2701): Already using the vfio-pci driver 00:03:47.482 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:47.482 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:47.482 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:47.482 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:47.482 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:47.482 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:47.482 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:47.482 16:14:56 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:47.482 16:14:56 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:47.482 16:14:56 -- setup/hugepages.sh@89 -- # local node 00:03:47.482 16:14:56 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:47.482 16:14:56 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:47.482 16:14:56 -- setup/hugepages.sh@92 -- # local surp 00:03:47.482 16:14:56 -- setup/hugepages.sh@93 -- # local resv 00:03:47.482 16:14:56 -- setup/hugepages.sh@94 -- # local anon 00:03:47.482 16:14:56 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:47.482 16:14:56 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:47.482 16:14:56 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:47.482 16:14:56 -- setup/common.sh@18 -- # local node= 00:03:47.482 16:14:56 -- setup/common.sh@19 -- # local var val 
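Before the vfio-pci listing above, custom_alloc sized its per-node request and handed it to scripts/setup.sh through HUGENODE. With this run's 2048 kB default hugepage size, the two get_test_nr_hugepages calls reduce to the arithmetic below; a hedged sketch of the assembly (the variable names mirror the trace, but this is an illustration, not the test code):

default_hugepages=2048                              # kB per page, matching Hugepagesize in the meminfo dumps
declare -a nodes_hp HUGENODE
nodes_hp[0]=$(( 1048576 / default_hugepages ))      # 1 GiB request -> 512 pages pinned to node 0
nodes_hp[1]=$(( 2097152 / default_hugepages ))      # 2 GiB request -> 1024 pages pinned to node 1
for node in "${!nodes_hp[@]}"; do
  HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
done
( IFS=,; echo "HUGENODE=${HUGENODE[*]}" )           # -> HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024
echo $(( nodes_hp[0] + nodes_hp[1] ))               # -> 1536, the HugePages_Total reported below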
00:03:47.483 16:14:56 -- setup/common.sh@20 -- # local mem_f mem 00:03:47.483 16:14:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.483 16:14:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.483 16:14:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.483 16:14:56 -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.483 16:14:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.483 16:14:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65456540 kB' 'MemFree: 48729720 kB' 'MemAvailable: 49147068 kB' 'Buffers: 1064 kB' 'Cached: 11189700 kB' 'SwapCached: 0 kB' 'Active: 11447288 kB' 'Inactive: 264060 kB' 'Active(anon): 10873668 kB' 'Inactive(anon): 0 kB' 'Active(file): 573620 kB' 'Inactive(file): 264060 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523392 kB' 'Mapped: 147708 kB' 'Shmem: 10353084 kB' 'KReclaimable: 184620 kB' 'Slab: 536376 kB' 'SReclaimable: 184620 kB' 'SUnreclaim: 351756 kB' 'KernelStack: 16144 kB' 'PageTables: 7452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 39544008 kB' 'Committed_AS: 12319760 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 200004 kB' 'VmallocChunk: 0 kB' 'Percpu: 50560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 703912 kB' 'DirectMap2M: 18894848 kB' 'DirectMap1G: 49283072 kB' 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 
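The pass that started just above is verify_nr_hugepages reading AnonHugePages, gated on the transparent-hugepage setting seen at hugepages.sh@96 ("always [madvise] never", i.e. not globally disabled). A small sketch of that gate under the standard kernel paths; the variable names and the grep/read lookup are mine, the script reuses its generic get_meminfo loop instead:

thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)   # e.g. "always [madvise] never"
anon=0
if [[ $thp != *"[never]"* ]]; then
  # AnonHugePages is reported in kB; it is 0 kB in the meminfo dump above
  read -r _ anon _ < <(grep '^AnonHugePages:' /proc/meminfo)
fi
echo "anon_hugepages=${anon:-0}"                                      # -> anon_hugepages=0 in this run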
00:03:47.483 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # 
continue 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.483 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.483 16:14:56 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:47.484 16:14:56 -- setup/common.sh@33 -- # echo 0 00:03:47.484 16:14:56 -- setup/common.sh@33 -- # return 0 00:03:47.484 16:14:56 -- setup/hugepages.sh@97 -- # anon=0 00:03:47.484 16:14:56 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:47.484 16:14:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.484 16:14:56 -- setup/common.sh@18 -- # local node= 00:03:47.484 16:14:56 -- setup/common.sh@19 -- # local var val 00:03:47.484 16:14:56 -- setup/common.sh@20 -- # local mem_f mem 00:03:47.484 16:14:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.484 16:14:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.484 16:14:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.484 16:14:56 -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.484 16:14:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.484 16:14:56 -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 65456540 kB' 'MemFree: 48729796 kB' 'MemAvailable: 49147144 kB' 'Buffers: 1064 kB' 'Cached: 11189704 kB' 'SwapCached: 0 kB' 'Active: 11446472 kB' 'Inactive: 264060 kB' 'Active(anon): 10872852 kB' 'Inactive(anon): 0 kB' 'Active(file): 573620 kB' 'Inactive(file): 264060 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522992 kB' 'Mapped: 147600 kB' 'Shmem: 10353088 kB' 'KReclaimable: 184620 kB' 'Slab: 536340 kB' 'SReclaimable: 184620 kB' 'SUnreclaim: 351720 kB' 'KernelStack: 16144 kB' 'PageTables: 7444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 39544008 kB' 'Committed_AS: 12319772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199972 kB' 'VmallocChunk: 0 kB' 'Percpu: 50560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 703912 kB' 'DirectMap2M: 18894848 kB' 'DirectMap1G: 49283072 kB' 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # [[ 
Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.484 16:14:56 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.484 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.484 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # 
continue 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.485 16:14:56 
-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.485 16:14:56 -- setup/common.sh@33 -- # echo 0 00:03:47.485 16:14:56 -- setup/common.sh@33 -- # return 0 00:03:47.485 16:14:56 -- setup/hugepages.sh@99 -- # surp=0 00:03:47.485 16:14:56 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:47.485 16:14:56 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:47.485 16:14:56 -- setup/common.sh@18 -- # local node= 00:03:47.485 16:14:56 -- setup/common.sh@19 -- # local var val 00:03:47.485 16:14:56 -- setup/common.sh@20 -- # local mem_f mem 00:03:47.485 16:14:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.485 16:14:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.485 16:14:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.485 16:14:56 -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.485 16:14:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.485 16:14:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65456540 kB' 'MemFree: 48729796 kB' 'MemAvailable: 49147144 kB' 'Buffers: 1064 kB' 'Cached: 11189704 kB' 'SwapCached: 0 kB' 'Active: 11446512 kB' 'Inactive: 264060 kB' 'Active(anon): 10872892 kB' 'Inactive(anon): 0 kB' 'Active(file): 573620 kB' 'Inactive(file): 264060 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523020 kB' 'Mapped: 147600 kB' 'Shmem: 10353088 kB' 'KReclaimable: 184620 kB' 'Slab: 536340 kB' 'SReclaimable: 184620 kB' 'SUnreclaim: 351720 kB' 'KernelStack: 16160 kB' 'PageTables: 7492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 39544008 kB' 'Committed_AS: 12319784 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199972 kB' 'VmallocChunk: 0 kB' 'Percpu: 50560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 703912 kB' 'DirectMap2M: 18894848 kB' 'DirectMap1G: 49283072 kB' 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.485 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.485 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.486 16:14:56 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.486 
16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 
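Each of these passes over the full /proc/meminfo dump exists only to pull one counter (surplus above, reserved here). For reference, the same lookups as stand-alone one-liners; this is an illustration of what each pass computes, not how setup/common.sh does it (the script keeps its generic loop so one helper serves both the global and the per-node files):

for key in HugePages_Total HugePages_Free HugePages_Rsvd HugePages_Surp; do
  printf '%s=%s\n' "$key" "$(awk -v k="${key}:" '$1 == k {print $2}' /proc/meminfo)"
done
# On the box traced here this prints 1536, 1536, 0 and 0 respectively.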
00:03:47.486 16:14:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.486 16:14:56 -- setup/common.sh@32 -- # continue 
00:03:47.486 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.486 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.487 16:14:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:47.487 16:14:56 -- setup/common.sh@33 -- # echo 0 00:03:47.487 16:14:56 -- setup/common.sh@33 -- # return 0 00:03:47.487 16:14:56 -- setup/hugepages.sh@100 -- # resv=0 00:03:47.487 16:14:56 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:47.487 nr_hugepages=1536 00:03:47.487 16:14:56 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:47.487 resv_hugepages=0 00:03:47.487 16:14:56 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:47.487 surplus_hugepages=0 00:03:47.487 16:14:56 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:47.487 anon_hugepages=0 00:03:47.487 16:14:56 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:47.487 16:14:56 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:47.487 16:14:56 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:47.487 16:14:56 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:47.487 16:14:56 -- setup/common.sh@18 -- # local node= 00:03:47.487 16:14:56 -- setup/common.sh@19 -- # local var val 00:03:47.487 16:14:56 -- setup/common.sh@20 -- # local mem_f mem 00:03:47.487 16:14:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.487 16:14:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.487 16:14:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.487 16:14:56 -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.487 16:14:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.487 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.487 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.487 16:14:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65456540 kB' 'MemFree: 48730300 kB' 'MemAvailable: 49147648 kB' 'Buffers: 1064 kB' 'Cached: 11189704 kB' 'SwapCached: 0 kB' 'Active: 11446540 kB' 'Inactive: 264060 kB' 'Active(anon): 10872920 kB' 'Inactive(anon): 0 kB' 'Active(file): 573620 kB' 'Inactive(file): 264060 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523048 kB' 'Mapped: 147600 kB' 'Shmem: 10353088 kB' 'KReclaimable: 184620 kB' 'Slab: 536340 kB' 'SReclaimable: 184620 kB' 'SUnreclaim: 351720 kB' 'KernelStack: 16176 kB' 'PageTables: 7540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 39544008 kB' 'Committed_AS: 12319800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199972 kB' 'VmallocChunk: 0 kB' 'Percpu: 50560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 703912 kB' 'DirectMap2M: 18894848 kB' 'DirectMap1G: 49283072 kB' 00:03:47.487 16:14:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.487 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.487 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.487 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.487 16:14:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:47.487 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.487 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.487 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.487 16:14:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.487 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.487 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.487 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.487 16:14:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.487 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.487 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.487 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.487 16:14:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.487 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.487 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.487 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.487 16:14:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.487 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.487 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.487 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.487 16:14:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.487 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.487 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.487 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.487 16:14:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.487 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.487 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.487 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.487 16:14:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.487 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.487 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.487 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.487 16:14:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.487 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.487 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.487 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.487 16:14:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.487 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.487 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.487 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.487 16:14:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.487 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.487 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.487 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.487 16:14:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.487 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.487 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.487 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.487 16:14:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.487 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.487 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.487 16:14:56 -- setup/common.sh@31 
-- # read -r var val _ 00:03:47.487 16:14:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.487 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.487 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.487 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.487 16:14:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.487 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.487 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.487 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.487 16:14:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.487 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.487 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.487 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.487 16:14:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.487 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.487 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.487 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.487 16:14:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.487 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.487 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.487 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.487 16:14:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.487 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.487 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.487 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.487 16:14:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.487 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.487 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.487 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.487 16:14:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.487 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.487 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.487 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # continue 
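The bookkeeping these values feed (nr_hugepages=1536, resv_hugepages=0, surplus_hugepages=0 echoed above, followed by a fresh HugePages_Total read) amounts to the check sketched below; the awk helper and variable names are illustrative, not the script's own.

```bash
#!/usr/bin/env bash
# Condensed, hypothetical form of the accounting check: read the global
# hugepage counters from /proc/meminfo and require that the 1536 pages this
# test asked for (512 on node0 + 1024 on node1) are fully backed.
meminfo() { awk -v k="$1" '$1 == k":" { print $2 }' /proc/meminfo; }

nr_hugepages=$(meminfo HugePages_Total)   # 1536 in this run
resv=$(meminfo HugePages_Rsvd)            # 0
surp=$(meminfo HugePages_Surp)            # 0
echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
(( 1536 == nr_hugepages + surp + resv )) || echo 'unexpected hugepage accounting' >&2
```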
00:03:47.488 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.488 16:14:56 -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:47.488 16:14:56 -- setup/common.sh@33 -- # echo 1536 00:03:47.488 16:14:56 -- setup/common.sh@33 -- # return 0 00:03:47.488 16:14:56 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:47.488 16:14:56 -- setup/hugepages.sh@112 -- # get_nodes 00:03:47.488 16:14:56 -- setup/hugepages.sh@27 -- # local node 00:03:47.488 16:14:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:47.488 16:14:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:47.488 16:14:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:47.488 16:14:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:47.488 16:14:56 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:47.488 16:14:56 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:47.488 16:14:56 -- setup/hugepages.sh@115 -- # for node in 
"${!nodes_test[@]}" 00:03:47.488 16:14:56 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:47.488 16:14:56 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:47.488 16:14:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.488 16:14:56 -- setup/common.sh@18 -- # local node=0 00:03:47.488 16:14:56 -- setup/common.sh@19 -- # local var val 00:03:47.488 16:14:56 -- setup/common.sh@20 -- # local mem_f mem 00:03:47.488 16:14:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.488 16:14:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:47.488 16:14:56 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:47.488 16:14:56 -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.488 16:14:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.488 16:14:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634076 kB' 'MemFree: 23055432 kB' 'MemUsed: 9578644 kB' 'SwapCached: 0 kB' 'Active: 7276112 kB' 'Inactive: 65060 kB' 'Active(anon): 6897888 kB' 'Inactive(anon): 0 kB' 'Active(file): 378224 kB' 'Inactive(file): 65060 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7184156 kB' 'Mapped: 71620 kB' 'AnonPages: 160240 kB' 'Shmem: 6740872 kB' 'KernelStack: 8552 kB' 'PageTables: 2612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87332 kB' 'Slab: 290128 kB' 'SReclaimable: 87332 kB' 'SUnreclaim: 202796 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.488 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.488 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.489 16:14:56 -- 
setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 
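The node-0 lookup above (mem_f switched to /sys/devices/system/node/node0/meminfo, HugePages_Surp read per node) is the building block of a per-node pass that roughly does the following. The sysfs paths are real kernel locations, but the loop itself is an illustrative reconstruction, not the hugepages.sh source.

```bash
#!/usr/bin/env bash
# Per-node pass, sketched: the expected split (512 pages on node0, 1024 on
# node1 in this run) lives in sysfs, and each node's surplus count comes from
# the same /sys/devices/system/node/node<N>/meminfo files probed above.
nodes_test=()
for node in /sys/devices/system/node/node[0-9]*; do
	n=${node##*node}
	nodes_test[n]=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
	surp=$(awk '$3 == "HugePages_Surp:" { print $4 }' "$node/meminfo")
	(( nodes_test[n] += surp ))   # surplus pages count toward the node's total
done
for n in "${!nodes_test[@]}"; do
	echo "node$n holds ${nodes_test[n]} 2048kB hugepages"
done
```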
00:03:47.489 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:47.489 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.489 16:14:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.489 16:14:56 -- setup/common.sh@33 -- # echo 0 00:03:47.489 16:14:56 -- setup/common.sh@33 -- # return 0 00:03:47.489 16:14:56 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:47.489 16:14:56 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:47.489 16:14:56 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:47.489 16:14:56 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:47.489 16:14:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:47.489 16:14:56 -- setup/common.sh@18 -- # local node=1 00:03:47.489 16:14:56 -- setup/common.sh@19 -- # local var val 00:03:47.489 16:14:56 -- setup/common.sh@20 -- # local mem_f mem 00:03:47.489 16:14:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.489 16:14:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:47.489 16:14:56 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:47.489 16:14:56 -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.489 16:14:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.489 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.490 16:14:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32822464 kB' 'MemFree: 25678416 kB' 'MemUsed: 7144048 kB' 'SwapCached: 0 kB' 'Active: 4170560 kB' 'Inactive: 199000 kB' 'Active(anon): 3975164 kB' 'Inactive(anon): 0 kB' 'Active(file): 195396 kB' 'Inactive(file): 199000 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4006652 kB' 'Mapped: 75980 kB' 'AnonPages: 363040 kB' 'Shmem: 3612256 kB' 'KernelStack: 7592 kB' 'PageTables: 4844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97288 kB' 'Slab: 246196 kB' 'SReclaimable: 97288 kB' 'SUnreclaim: 148908 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:47.490 16:14:56 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 
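The two per-node counts are finally compared against the expected 512/1024 split: the "node0=512 expecting 512" / "node1=1024 expecting 1024" echoes a few entries below and the [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] string match reduce to something like the following, with values hard-coded from this run and names illustrative.

```bash
#!/usr/bin/env bash
# Closing comparison, sketched with this run's numbers: join the measured
# per-node totals with commas and string-compare against the expected split.
nodes_test=(512 1024)     # measured totals from the per-node pass
expected=(512 1024)       # what custom_alloc asked for
for n in "${!nodes_test[@]}"; do
	echo "node$n=${nodes_test[n]} expecting ${expected[n]}"
done
joined=$(IFS=,; echo "${nodes_test[*]}")
[[ $joined == "512,1024" ]] && echo 'custom_alloc node split verified'
```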
00:03:47.490 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.490 16:14:56 -- 
setup/common.sh@32 -- # continue 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # continue 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:47.490 16:14:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:47.490 16:14:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.490 16:14:56 -- setup/common.sh@33 -- # echo 0 00:03:47.490 16:14:56 -- setup/common.sh@33 -- # return 0 00:03:47.490 16:14:56 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:47.490 16:14:56 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:47.490 16:14:56 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:47.490 16:14:56 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:47.490 
16:14:56 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:47.490 node0=512 expecting 512 00:03:47.490 16:14:56 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:47.490 16:14:56 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:47.490 16:14:56 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:47.490 16:14:56 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:47.490 node1=1024 expecting 1024 00:03:47.490 16:14:56 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:47.490 00:03:47.490 real 0m3.236s 00:03:47.490 user 0m1.180s 00:03:47.490 sys 0m2.034s 00:03:47.491 16:14:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:47.491 16:14:56 -- common/autotest_common.sh@10 -- # set +x 00:03:47.491 ************************************ 00:03:47.491 END TEST custom_alloc 00:03:47.491 ************************************ 00:03:47.491 16:14:56 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:47.491 16:14:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:47.491 16:14:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:47.491 16:14:56 -- common/autotest_common.sh@10 -- # set +x 00:03:47.491 ************************************ 00:03:47.491 START TEST no_shrink_alloc 00:03:47.491 ************************************ 00:03:47.491 16:14:56 -- common/autotest_common.sh@1111 -- # no_shrink_alloc 00:03:47.491 16:14:56 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:47.491 16:14:56 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:47.491 16:14:56 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:47.491 16:14:56 -- setup/hugepages.sh@51 -- # shift 00:03:47.491 16:14:56 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:47.491 16:14:56 -- setup/hugepages.sh@52 -- # local node_ids 00:03:47.491 16:14:56 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:47.491 16:14:56 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:47.491 16:14:56 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:47.491 16:14:56 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:47.491 16:14:56 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:47.491 16:14:56 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:47.491 16:14:56 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:47.491 16:14:56 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:47.491 16:14:56 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:47.491 16:14:56 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:47.491 16:14:56 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:47.491 16:14:56 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:47.491 16:14:56 -- setup/hugepages.sh@73 -- # return 0 00:03:47.491 16:14:56 -- setup/hugepages.sh@198 -- # setup output 00:03:47.491 16:14:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.491 16:14:56 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:50.817 0000:5e:00.0 (144d a80a): Already using the vfio-pci driver 00:03:50.817 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:50.817 0000:af:00.0 (8086 2701): Already using the vfio-pci driver 00:03:50.817 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:50.817 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:50.817 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:50.817 0000:00:04.3 (8086 
2021): Already using the vfio-pci driver 00:03:50.817 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:50.817 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:50.817 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:50.817 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:50.817 0000:b0:00.0 (8086 2701): Already using the vfio-pci driver 00:03:50.817 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:50.817 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:50.817 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:50.817 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:50.817 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:50.817 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:50.817 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:50.817 16:14:59 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:50.817 16:14:59 -- setup/hugepages.sh@89 -- # local node 00:03:50.817 16:14:59 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:50.817 16:14:59 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:50.817 16:14:59 -- setup/hugepages.sh@92 -- # local surp 00:03:50.817 16:14:59 -- setup/hugepages.sh@93 -- # local resv 00:03:50.817 16:14:59 -- setup/hugepages.sh@94 -- # local anon 00:03:50.817 16:14:59 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:50.817 16:14:59 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:50.817 16:14:59 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:50.817 16:14:59 -- setup/common.sh@18 -- # local node= 00:03:50.817 16:14:59 -- setup/common.sh@19 -- # local var val 00:03:50.817 16:14:59 -- setup/common.sh@20 -- # local mem_f mem 00:03:50.817 16:14:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.817 16:14:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.817 16:14:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.817 16:14:59 -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.817 16:14:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.817 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.817 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.818 16:14:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65456540 kB' 'MemFree: 49741560 kB' 'MemAvailable: 50158924 kB' 'Buffers: 1064 kB' 'Cached: 11189816 kB' 'SwapCached: 0 kB' 'Active: 11451460 kB' 'Inactive: 264060 kB' 'Active(anon): 10877840 kB' 'Inactive(anon): 0 kB' 'Active(file): 573620 kB' 'Inactive(file): 264060 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527928 kB' 'Mapped: 147652 kB' 'Shmem: 10353200 kB' 'KReclaimable: 184652 kB' 'Slab: 536992 kB' 'SReclaimable: 184652 kB' 'SUnreclaim: 352340 kB' 'KernelStack: 16368 kB' 'PageTables: 7824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40068296 kB' 'Committed_AS: 12322832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 200100 kB' 'VmallocChunk: 0 kB' 'Percpu: 50560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
2097152 kB' 'DirectMap4k: 703912 kB' 'DirectMap2M: 18894848 kB' 'DirectMap1G: 49283072 kB' 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.818 16:14:59 
-- setup/common.sh@32 -- # continue 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # [[ SReclaimable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.818 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.818 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.819 16:14:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.819 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.819 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.819 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.819 16:14:59 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.819 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.819 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.819 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.819 16:14:59 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.819 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.819 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.819 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.819 16:14:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.819 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.819 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.819 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.819 16:14:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.819 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.819 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.819 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.819 16:14:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.819 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.819 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.819 16:14:59 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:50.819 16:14:59 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.819 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.819 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.819 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.819 16:14:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.819 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.819 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.819 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.819 16:14:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:50.819 16:14:59 -- setup/common.sh@33 -- # echo 0 00:03:50.819 16:14:59 -- setup/common.sh@33 -- # return 0 00:03:50.819 16:14:59 -- setup/hugepages.sh@97 -- # anon=0 00:03:50.819 16:14:59 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:50.819 16:14:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.819 16:14:59 -- setup/common.sh@18 -- # local node= 00:03:50.819 16:14:59 -- setup/common.sh@19 -- # local var val 00:03:50.819 16:14:59 -- setup/common.sh@20 -- # local mem_f mem 00:03:50.819 16:14:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.819 16:14:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.819 16:14:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.819 16:14:59 -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.819 16:14:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.819 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.819 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.819 16:14:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65456540 kB' 'MemFree: 49742852 kB' 'MemAvailable: 50160216 kB' 'Buffers: 1064 kB' 'Cached: 11189816 kB' 'SwapCached: 0 kB' 'Active: 11450996 kB' 'Inactive: 264060 kB' 'Active(anon): 10877376 kB' 'Inactive(anon): 0 kB' 'Active(file): 573620 kB' 'Inactive(file): 264060 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527468 kB' 'Mapped: 147704 kB' 'Shmem: 10353200 kB' 'KReclaimable: 184652 kB' 'Slab: 537184 kB' 'SReclaimable: 184652 kB' 'SUnreclaim: 352532 kB' 'KernelStack: 16320 kB' 'PageTables: 7704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40068296 kB' 'Committed_AS: 12322844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 200180 kB' 'VmallocChunk: 0 kB' 'Percpu: 50560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 703912 kB' 'DirectMap2M: 18894848 kB' 'DirectMap1G: 49283072 kB' 00:03:50.819 16:14:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.819 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.819 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.819 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.819 16:14:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.819 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.819 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.819 16:14:59 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:50.819 16:14:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.819 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.819 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.819 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.819 16:14:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.819 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.819 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.819 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.819 16:14:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.819 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.819 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.819 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.819 16:14:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.819 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.819 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.819 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.819 16:14:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.819 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.819 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.819 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.819 16:14:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.819 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.819 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.819 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.819 16:14:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.819 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.819 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.819 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.819 16:14:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.819 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.819 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.819 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.819 16:14:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.819 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.819 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.819 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.819 16:14:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.819 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.819 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.819 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.819 16:14:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.819 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.819 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.819 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.819 16:14:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.819 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.819 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.819 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.819 16:14:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.820 16:14:59 -- setup/common.sh@32 
-- # continue 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.820 
16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.820 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.820 16:14:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.821 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.821 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.821 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.821 16:14:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.821 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.821 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.821 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.821 16:14:59 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.821 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.821 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.821 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.821 16:14:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.821 16:14:59 -- setup/common.sh@33 -- # echo 0 00:03:50.821 16:14:59 -- setup/common.sh@33 -- # return 0 00:03:50.821 16:14:59 -- setup/hugepages.sh@99 -- # surp=0 00:03:50.821 16:14:59 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:50.821 16:14:59 -- 
setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:50.821 16:14:59 -- setup/common.sh@18 -- # local node= 00:03:50.821 16:14:59 -- setup/common.sh@19 -- # local var val 00:03:50.821 16:14:59 -- setup/common.sh@20 -- # local mem_f mem 00:03:50.821 16:14:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.821 16:14:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.821 16:14:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.821 16:14:59 -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.821 16:14:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.821 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.821 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.821 16:14:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65456540 kB' 'MemFree: 49741852 kB' 'MemAvailable: 50159216 kB' 'Buffers: 1064 kB' 'Cached: 11189816 kB' 'SwapCached: 0 kB' 'Active: 11451640 kB' 'Inactive: 264060 kB' 'Active(anon): 10878020 kB' 'Inactive(anon): 0 kB' 'Active(file): 573620 kB' 'Inactive(file): 264060 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528200 kB' 'Mapped: 147704 kB' 'Shmem: 10353200 kB' 'KReclaimable: 184652 kB' 'Slab: 537184 kB' 'SReclaimable: 184652 kB' 'SUnreclaim: 352532 kB' 'KernelStack: 16432 kB' 'PageTables: 8172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40068296 kB' 'Committed_AS: 12322676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 200212 kB' 'VmallocChunk: 0 kB' 'Percpu: 50560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 703912 kB' 'DirectMap2M: 18894848 kB' 'DirectMap1G: 49283072 kB' 00:03:50.821 16:14:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.821 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.821 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.821 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.821 16:14:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.821 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.821 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.821 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.821 16:14:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.821 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.821 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.821 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.821 16:14:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.821 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.821 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.821 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.821 16:14:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.821 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.821 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.821 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.821 16:14:59 -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.821 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.821 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.821 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.821 16:14:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.821 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.821 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.821 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.821 16:14:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.821 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.821 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.821 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.821 16:14:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.821 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.821 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.821 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.821 16:14:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.821 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.821 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.821 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.821 16:14:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.821 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.821 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.821 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.821 16:14:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.821 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.821 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.821 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.821 16:14:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.821 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.821 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.821 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.821 16:14:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.821 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.821 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.821 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.821 16:14:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.821 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.821 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.821 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.821 16:14:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.821 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.821 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.821 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.821 16:14:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.821 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.821 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.821 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.821 16:14:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.821 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.821 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.821 16:14:59 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:50.821 16:14:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.821 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.821 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.821 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.821 16:14:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.821 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.821 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # 
continue 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.822 16:14:59 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.822 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.822 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.823 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.823 16:14:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.823 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.823 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.823 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.823 16:14:59 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:50.823 16:14:59 -- setup/common.sh@33 -- # echo 0 00:03:50.823 16:14:59 -- setup/common.sh@33 -- # return 0 00:03:50.823 16:14:59 -- setup/hugepages.sh@100 -- # resv=0 00:03:50.823 16:14:59 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:50.823 nr_hugepages=1024 00:03:50.823 16:14:59 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:50.823 resv_hugepages=0 00:03:50.823 16:14:59 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:50.823 surplus_hugepages=0 00:03:50.823 16:14:59 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:50.823 anon_hugepages=0 00:03:50.823 16:14:59 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:50.823 16:14:59 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:50.823 16:14:59 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:50.823 16:14:59 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:50.823 16:14:59 -- setup/common.sh@18 -- # local node= 00:03:50.823 16:14:59 -- setup/common.sh@19 -- # local var val 00:03:50.823 16:14:59 -- setup/common.sh@20 -- # local mem_f mem 00:03:50.823 16:14:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.823 16:14:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.823 16:14:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.823 16:14:59 -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.823 16:14:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
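The xtrace above is the get_meminfo helper in setup/common.sh scanning /proc/meminfo one key at a time (first AnonHugePages, then HugePages_Surp and HugePages_Rsvd) and echoing the bare value back to hugepages.sh, which records anon=0, surp=0 and resv=0. A minimal sketch of that helper, reconstructed only from the commands visible in this trace (the real test/setup/common.sh in the SPDK tree may differ in detail, so treat names and structure as assumptions):

#!/usr/bin/env bash
# Minimal sketch of the traced get_meminfo helper. Reconstructed from the
# xtrace output above, not copied from the SPDK sources.
shopt -s extglob

get_meminfo() {
	local get=$1        # key to look up, e.g. AnonHugePages or HugePages_Total
	local node=${2:-}   # optional NUMA node; empty means system-wide
	local var val _
	local mem_f mem line

	mem_f=/proc/meminfo
	# Per-node lookups read that node's own meminfo, whose lines carry a
	# "Node <n> " prefix that gets stripped below.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem < "$mem_f"
	mem=("${mem[@]#Node +([0-9]) }")

	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] || continue
		echo "$val"     # bare value: kB figures lose their unit, counts stay as-is
		return 0
	done
	return 1
}

# The values the surrounding trace extracts before the accounting check:
anon=$(get_meminfo AnonHugePages)    # 0 in this run
surp=$(get_meminfo HugePages_Surp)   # 0
resv=$(get_meminfo HugePages_Rsvd)   # 0

Run against HugePages_Total the same walk returns 1024 here, which is what the (( 1024 == nr_hugepages + surp + resv )) check in hugepages.sh compares against in the trace that follows.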
00:03:50.823 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.823 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.823 16:14:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65456540 kB' 'MemFree: 49740244 kB' 'MemAvailable: 50157608 kB' 'Buffers: 1064 kB' 'Cached: 11189844 kB' 'SwapCached: 0 kB' 'Active: 11451352 kB' 'Inactive: 264060 kB' 'Active(anon): 10877732 kB' 'Inactive(anon): 0 kB' 'Active(file): 573620 kB' 'Inactive(file): 264060 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527748 kB' 'Mapped: 147620 kB' 'Shmem: 10353228 kB' 'KReclaimable: 184652 kB' 'Slab: 537184 kB' 'SReclaimable: 184652 kB' 'SUnreclaim: 352532 kB' 'KernelStack: 16528 kB' 'PageTables: 8372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40068296 kB' 'Committed_AS: 12322872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 200212 kB' 'VmallocChunk: 0 kB' 'Percpu: 50560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 703912 kB' 'DirectMap2M: 18894848 kB' 'DirectMap1G: 49283072 kB' 00:03:50.823 16:14:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.823 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.823 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.823 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.823 16:14:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.823 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.823 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.823 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.823 16:14:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.823 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.823 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.823 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.823 16:14:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.823 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.823 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.823 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.823 16:14:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.823 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.823 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.823 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.823 16:14:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.823 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.823 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.823 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.823 16:14:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.823 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.823 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.823 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.823 16:14:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.823 16:14:59 -- 
setup/common.sh@32 -- # continue 00:03:50.823 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.823 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.823 16:14:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.823 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.823 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.823 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.823 16:14:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.823 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.823 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.823 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.823 16:14:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.823 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.823 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.823 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.823 16:14:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.823 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.823 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.823 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.823 16:14:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 
00:03:50.824 16:14:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # continue 
00:03:50.824 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.824 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.824 16:14:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.825 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.825 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.825 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.825 16:14:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.825 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.825 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.825 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.825 16:14:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.825 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.825 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.825 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 
00:03:50.825 16:14:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.825 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.825 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.825 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.825 16:14:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.825 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.825 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.825 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.825 16:14:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.825 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.825 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.825 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.825 16:14:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:50.825 16:14:59 -- setup/common.sh@33 -- # echo 1024 00:03:50.825 16:14:59 -- setup/common.sh@33 -- # return 0 00:03:50.825 16:14:59 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:50.825 16:14:59 -- setup/hugepages.sh@112 -- # get_nodes 00:03:50.825 16:14:59 -- setup/hugepages.sh@27 -- # local node 00:03:50.825 16:14:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:50.825 16:14:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:50.825 16:14:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:50.825 16:14:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:50.825 16:14:59 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:50.825 16:14:59 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:50.825 16:14:59 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:50.825 16:14:59 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:50.825 16:14:59 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:50.825 16:14:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.825 16:14:59 -- setup/common.sh@18 -- # local node=0 00:03:50.825 16:14:59 -- setup/common.sh@19 -- # local var val 00:03:50.825 16:14:59 -- setup/common.sh@20 -- # local mem_f mem 00:03:50.825 16:14:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.825 16:14:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:50.825 16:14:59 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:50.825 16:14:59 -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.825 16:14:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.825 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.825 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.825 16:14:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634076 kB' 'MemFree: 21997608 kB' 'MemUsed: 10636468 kB' 'SwapCached: 0 kB' 'Active: 7279988 kB' 'Inactive: 65060 kB' 'Active(anon): 6901764 kB' 'Inactive(anon): 0 kB' 'Active(file): 378224 kB' 'Inactive(file): 65060 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7184260 kB' 'Mapped: 71648 kB' 'AnonPages: 163936 kB' 'Shmem: 6740976 kB' 'KernelStack: 8520 kB' 'PageTables: 2476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87364 kB' 'Slab: 291020 kB' 'SReclaimable: 87364 kB' 'SUnreclaim: 203656 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:50.825 16:14:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.825 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.825 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.825 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.825 16:14:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.825 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.825 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.825 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.825 16:14:59 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.825 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.825 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.825 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.825 16:14:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.825 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.825 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.825 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.825 16:14:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.825 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.825 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.825 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.825 16:14:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.825 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.825 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.825 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.825 16:14:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.825 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.825 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.825 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.825 16:14:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.825 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.825 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.825 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.825 16:14:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.825 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.825 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.825 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.825 16:14:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.825 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.825 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.825 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.825 16:14:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.825 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.825 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.825 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.825 16:14:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.825 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.825 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.825 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.825 16:14:59 -- 
setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.825 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.825 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.825 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.825 16:14:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.825 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.825 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.825 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.825 16:14:59 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.825 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.826 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.826 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.826 16:14:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.826 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.826 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.826 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.826 16:14:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.826 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.826 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.826 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.826 16:14:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.826 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.826 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.826 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.826 16:14:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.826 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.826 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.826 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.826 16:14:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.826 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.826 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.826 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.826 16:14:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.826 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.826 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.826 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.826 16:14:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.826 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.826 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.826 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.826 16:14:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.826 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.826 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.826 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.826 16:14:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.826 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.826 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.826 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.826 16:14:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.826 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.826 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 
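By this point hugepages.sh has moved from system-wide totals to per-node accounting: get_nodes filled nodes_sys with 1024 pages on node0 and 0 on node1, and the loop over nodes_test is re-reading HugePages_Surp from /sys/devices/system/node/node0/meminfo. A rough sketch of that per-node bookkeeping, assembled from the array names and arithmetic visible in the trace; the sysfs nr_hugepages path and the exact loop structure are assumptions, not the verbatim hugepages.sh:

#!/usr/bin/env bash
# Sketch of the per-node hugepage accounting traced here. Array names
# (nodes_sys, nodes_test, sorted_t, sorted_s) come from the log; the rest is a
# best-effort reconstruction under stated assumptions.
shopt -s extglob

declare -a nodes_sys nodes_test sorted_t sorted_s

get_nodes() {
	local node
	for node in /sys/devices/system/node/node+([0-9]); do
		# What the kernel actually holds on this node (2 MB pages in this run).
		nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
	done
	no_nodes=${#nodes_sys[@]}   # 2 on this machine
	((no_nodes > 0))
}

# Expectation for this test: all 1024 pages on node0.
nodes_test[0]=1024
resv=0   # HugePages_Rsvd, parsed earlier via get_meminfo

get_nodes

for node in "${!nodes_test[@]}"; do
	# Fold reserved and per-node surplus pages into the expectation.
	((nodes_test[node] += resv))
	((nodes_test[node] += 0))   # get_meminfo HugePages_Surp "$node" returns 0 here
done

for node in "${!nodes_test[@]}"; do
	sorted_t[nodes_test[node]]=1
	sorted_s[nodes_sys[node]]=1
	echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
done

# Roughly what the final [[ 1024 == 1024 ]] check in the trace corresponds to:
[[ ${!sorted_s[*]} == "${!sorted_t[*]}" ]] && echo "hugepage distribution OK"

Running this on the node under test would print node0=1024 expecting 1024, matching the log line the trace emits just before it re-runs scripts/setup.sh with NRHUGE=512 and CLEAR_HUGE=no.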
00:03:50.826 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.826 16:14:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.826 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.826 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.826 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.826 16:14:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.826 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.826 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.826 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.826 16:14:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.826 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.826 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.826 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.826 16:14:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.826 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.826 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.826 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.826 16:14:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.826 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.826 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.826 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.826 16:14:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.826 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.826 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.826 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.826 16:14:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.826 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.826 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.826 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.826 16:14:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.826 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.826 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.826 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.826 16:14:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.826 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.826 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.826 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.826 16:14:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.826 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.826 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.826 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.826 16:14:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.826 16:14:59 -- setup/common.sh@32 -- # continue 00:03:50.826 16:14:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:50.826 16:14:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:50.826 16:14:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.826 16:14:59 -- setup/common.sh@33 -- # echo 0 00:03:50.826 16:14:59 -- setup/common.sh@33 -- # return 0 00:03:50.826 16:14:59 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:50.826 16:14:59 -- setup/hugepages.sh@126 -- # for node in 
"${!nodes_test[@]}" 00:03:50.826 16:14:59 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:50.826 16:14:59 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:50.826 16:14:59 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:50.826 node0=1024 expecting 1024 00:03:50.826 16:14:59 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:50.826 16:14:59 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:50.826 16:14:59 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:50.826 16:14:59 -- setup/hugepages.sh@202 -- # setup output 00:03:50.826 16:14:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:50.826 16:14:59 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:54.118 0000:5e:00.0 (144d a80a): Already using the vfio-pci driver 00:03:54.118 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:54.118 0000:af:00.0 (8086 2701): Already using the vfio-pci driver 00:03:54.118 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:54.118 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:54.118 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:54.118 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:54.118 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:54.118 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:54.118 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:54.118 0000:b0:00.0 (8086 2701): Already using the vfio-pci driver 00:03:54.118 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:54.118 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:54.118 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:54.118 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:54.118 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:54.118 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:54.118 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:54.118 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:54.118 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:54.118 16:15:02 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:54.118 16:15:02 -- setup/hugepages.sh@89 -- # local node 00:03:54.118 16:15:02 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:54.118 16:15:02 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:54.118 16:15:02 -- setup/hugepages.sh@92 -- # local surp 00:03:54.118 16:15:02 -- setup/hugepages.sh@93 -- # local resv 00:03:54.118 16:15:02 -- setup/hugepages.sh@94 -- # local anon 00:03:54.118 16:15:02 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:54.118 16:15:02 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:54.118 16:15:02 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:54.118 16:15:02 -- setup/common.sh@18 -- # local node= 00:03:54.118 16:15:02 -- setup/common.sh@19 -- # local var val 00:03:54.118 16:15:02 -- setup/common.sh@20 -- # local mem_f mem 00:03:54.118 16:15:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.118 16:15:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.118 16:15:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.118 16:15:02 -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.118 16:15:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.118 16:15:02 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:54.118 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.118 16:15:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65456540 kB' 'MemFree: 49686172 kB' 'MemAvailable: 50103536 kB' 'Buffers: 1064 kB' 'Cached: 11189908 kB' 'SwapCached: 0 kB' 'Active: 11459796 kB' 'Inactive: 264060 kB' 'Active(anon): 10886176 kB' 'Inactive(anon): 0 kB' 'Active(file): 573620 kB' 'Inactive(file): 264060 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535748 kB' 'Mapped: 148672 kB' 'Shmem: 10353292 kB' 'KReclaimable: 184652 kB' 'Slab: 536628 kB' 'SReclaimable: 184652 kB' 'SUnreclaim: 351976 kB' 'KernelStack: 16304 kB' 'PageTables: 7872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40068296 kB' 'Committed_AS: 12329940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199992 kB' 'VmallocChunk: 0 kB' 'Percpu: 50560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 703912 kB' 'DirectMap2M: 18894848 kB' 'DirectMap1G: 49283072 kB' 00:03:54.118 16:15:02 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.118 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.118 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.118 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.118 16:15:02 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.118 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.118 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.118 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.118 16:15:02 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.118 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.118 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.118 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.118 16:15:02 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.118 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.118 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.118 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.118 16:15:02 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.118 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.118 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.118 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.118 16:15:02 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.118 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.118 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.118 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.118 16:15:02 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.118 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.118 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.118 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.118 16:15:02 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.118 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.118 16:15:02 -- setup/common.sh@31 -- # IFS=': 
' 00:03:54.118 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.118 16:15:02 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.118 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.118 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.118 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.118 16:15:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.118 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.118 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.118 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.118 16:15:02 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.118 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.118 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.118 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.118 16:15:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.118 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.118 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.118 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.118 16:15:02 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.118 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.118 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.118 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.118 16:15:02 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.118 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.118 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.118 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.118 16:15:02 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.118 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.118 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # 
continue 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.119 16:15:02 -- setup/common.sh@33 -- # echo 0 00:03:54.119 16:15:02 -- setup/common.sh@33 -- # return 0 00:03:54.119 16:15:02 -- setup/hugepages.sh@97 -- # anon=0 00:03:54.119 16:15:02 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:54.119 16:15:02 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.119 16:15:02 -- setup/common.sh@18 -- # local node= 00:03:54.119 16:15:02 -- setup/common.sh@19 -- # local var val 00:03:54.119 16:15:02 -- setup/common.sh@20 -- # local mem_f mem 00:03:54.119 16:15:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.119 16:15:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.119 16:15:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.119 16:15:02 -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.119 16:15:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 16:15:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65456540 kB' 'MemFree: 49686172 kB' 'MemAvailable: 50103536 kB' 'Buffers: 1064 kB' 'Cached: 11189908 kB' 'SwapCached: 0 kB' 'Active: 11459808 kB' 'Inactive: 264060 kB' 'Active(anon): 10886188 kB' 'Inactive(anon): 0 kB' 'Active(file): 573620 kB' 'Inactive(file): 264060 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535812 kB' 'Mapped: 148628 kB' 'Shmem: 
10353292 kB' 'KReclaimable: 184652 kB' 'Slab: 536628 kB' 'SReclaimable: 184652 kB' 'SUnreclaim: 351976 kB' 'KernelStack: 16320 kB' 'PageTables: 7920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40068296 kB' 'Committed_AS: 12329952 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199944 kB' 'VmallocChunk: 0 kB' 'Percpu: 50560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 703912 kB' 'DirectMap2M: 18894848 kB' 'DirectMap1G: 49283072 kB' 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.119 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.119 16:15:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 16:15:02 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # continue 
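Earlier in this run, scripts/setup.sh was invoked with CLEAR_HUGE=no and NRHUGE=512 and reported "Requested 512 hugepages but 1024 already allocated on node0", i.e. the existing 1024-page pool was left in place rather than shrunk to the requested size. A rough sketch of a guard that behaves this way is shown below; the variable names and logic are assumptions for illustration, not the actual scripts/setup.sh code, though the sysfs path is the standard per-node hugepage knob.

#!/usr/bin/env bash
# Hypothetical illustration of "keep an existing, larger hugepage pool" behavior.
NRHUGE=${NRHUGE:-512}
CLEAR_HUGE=${CLEAR_HUGE:-no}
node=0
hp=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
allocated=$(<"$hp")

if [[ $CLEAR_HUGE != yes ]] && (( allocated >= NRHUGE )); then
    echo "INFO: Requested $NRHUGE hugepages but $allocated already allocated on node$node"
else
    echo "$NRHUGE" > "$hp"   # requires root; grows or shrinks the per-node pool
fi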
00:03:54.120 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # [[ 
VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.120 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.120 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.121 16:15:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.121 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.121 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.121 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.121 16:15:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.121 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.121 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.121 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.121 16:15:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.121 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.121 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.121 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.121 16:15:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.121 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.121 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 
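The verify_nr_hugepages pass being traced here gathers AnonHugePages, HugePages_Surp, HugePages_Rsvd and HugePages_Total with the same scan and then checks that the pool adds up. Reduced to its arithmetic, it looks roughly like the following; this is a simplified illustration reusing the get_meminfo_sketch helper from the earlier sketch, not the real setup/hugepages.sh.

nr_hugepages=1024                            # configured page count for this test
anon=$(get_meminfo_sketch AnonHugePages)     # 0 on this runner (THP, informational only)
surp=$(get_meminfo_sketch HugePages_Surp)    # 0
resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0
total=$(get_meminfo_sketch HugePages_Total)  # 1024

echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
# Consistent when the reported total equals the configured count plus any
# surplus and reserved pages: 1024 == 1024 + 0 + 0 here.
(( total == nr_hugepages + surp + resv )) && echo "node0=$total expecting $nr_hugepages"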
00:03:54.121 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.121 16:15:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.121 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.121 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.121 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.121 16:15:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.121 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.121 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.121 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.121 16:15:02 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.121 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.121 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.121 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.121 16:15:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.121 16:15:02 -- setup/common.sh@33 -- # echo 0 00:03:54.121 16:15:02 -- setup/common.sh@33 -- # return 0 00:03:54.121 16:15:02 -- setup/hugepages.sh@99 -- # surp=0 00:03:54.121 16:15:02 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:54.121 16:15:02 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:54.121 16:15:02 -- setup/common.sh@18 -- # local node= 00:03:54.121 16:15:02 -- setup/common.sh@19 -- # local var val 00:03:54.121 16:15:02 -- setup/common.sh@20 -- # local mem_f mem 00:03:54.121 16:15:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.121 16:15:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.121 16:15:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.121 16:15:02 -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.121 16:15:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.121 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.121 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.121 16:15:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65456540 kB' 'MemFree: 49685416 kB' 'MemAvailable: 50102780 kB' 'Buffers: 1064 kB' 'Cached: 11189908 kB' 'SwapCached: 0 kB' 'Active: 11458776 kB' 'Inactive: 264060 kB' 'Active(anon): 10885156 kB' 'Inactive(anon): 0 kB' 'Active(file): 573620 kB' 'Inactive(file): 264060 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535236 kB' 'Mapped: 148552 kB' 'Shmem: 10353292 kB' 'KReclaimable: 184652 kB' 'Slab: 536636 kB' 'SReclaimable: 184652 kB' 'SUnreclaim: 351984 kB' 'KernelStack: 16304 kB' 'PageTables: 7860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40068296 kB' 'Committed_AS: 12330980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199944 kB' 'VmallocChunk: 0 kB' 'Percpu: 50560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 703912 kB' 'DirectMap2M: 18894848 kB' 'DirectMap1G: 49283072 kB' 00:03:54.121 16:15:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.121 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.121 16:15:02 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:54.121 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.121 16:15:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.121 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.121 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.121 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.121 16:15:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.121 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.121 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.121 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.121 16:15:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.121 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.121 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.121 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.121 16:15:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.121 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.121 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.121 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.121 16:15:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.121 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.121 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.121 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.121 16:15:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.121 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.121 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.121 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.121 16:15:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.121 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.121 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.121 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.121 16:15:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.121 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.121 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.121 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.121 16:15:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.121 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.121 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.121 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.121 16:15:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.121 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.121 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.121 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.121 16:15:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.121 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.121 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.121 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.121 16:15:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.121 16:15:02 -- setup/common.sh@32 -- # continue 00:03:54.121 16:15:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.121 16:15:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.121 16:15:03 -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.121 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.121 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.121 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.121 16:15:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.121 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.121 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.121 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.121 16:15:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.121 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.121 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.121 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.121 16:15:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.121 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.121 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.121 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.121 16:15:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.121 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.121 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.121 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.121 16:15:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.121 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.121 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.121 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.121 16:15:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.121 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.121 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.121 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.121 16:15:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.121 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.121 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.121 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.121 16:15:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.121 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.121 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.121 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.121 16:15:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.121 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.121 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.121 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.121 16:15:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.121 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.121 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.121 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.121 16:15:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.121 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.121 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.122 
16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.122 16:15:03 -- setup/common.sh@33 -- # echo 0 00:03:54.122 16:15:03 -- setup/common.sh@33 -- # return 0 00:03:54.122 16:15:03 -- setup/hugepages.sh@100 -- # resv=0 00:03:54.122 16:15:03 -- setup/hugepages.sh@102 
-- # echo nr_hugepages=1024 00:03:54.122 nr_hugepages=1024 00:03:54.122 16:15:03 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:54.122 resv_hugepages=0 00:03:54.122 16:15:03 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:54.122 surplus_hugepages=0 00:03:54.122 16:15:03 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:54.122 anon_hugepages=0 00:03:54.122 16:15:03 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:54.122 16:15:03 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:54.122 16:15:03 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:54.122 16:15:03 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:54.122 16:15:03 -- setup/common.sh@18 -- # local node= 00:03:54.122 16:15:03 -- setup/common.sh@19 -- # local var val 00:03:54.122 16:15:03 -- setup/common.sh@20 -- # local mem_f mem 00:03:54.122 16:15:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.122 16:15:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.122 16:15:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.122 16:15:03 -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.122 16:15:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.122 16:15:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65456540 kB' 'MemFree: 49685668 kB' 'MemAvailable: 50103032 kB' 'Buffers: 1064 kB' 'Cached: 11189936 kB' 'SwapCached: 0 kB' 'Active: 11461564 kB' 'Inactive: 264060 kB' 'Active(anon): 10887944 kB' 'Inactive(anon): 0 kB' 'Active(file): 573620 kB' 'Inactive(file): 264060 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538040 kB' 'Mapped: 148552 kB' 'Shmem: 10353320 kB' 'KReclaimable: 184652 kB' 'Slab: 536620 kB' 'SReclaimable: 184652 kB' 'SUnreclaim: 351968 kB' 'KernelStack: 16288 kB' 'PageTables: 7792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 40068296 kB' 'Committed_AS: 12333184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199928 kB' 'VmallocChunk: 0 kB' 'Percpu: 50560 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 703912 kB' 'DirectMap2M: 18894848 kB' 'DirectMap1G: 49283072 kB' 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.122 16:15:03 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.122 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.122 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.123 16:15:03 -- setup/common.sh@32 
-- # continue 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.123 16:15:03 -- setup/common.sh@32 
-- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.123 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.123 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.124 16:15:03 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.124 16:15:03 -- setup/common.sh@33 -- # echo 1024 00:03:54.124 16:15:03 -- setup/common.sh@33 -- # return 0 00:03:54.124 16:15:03 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:54.124 16:15:03 -- setup/hugepages.sh@112 -- # get_nodes 00:03:54.124 16:15:03 -- setup/hugepages.sh@27 -- # local node 00:03:54.124 16:15:03 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.124 16:15:03 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:54.124 16:15:03 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.124 16:15:03 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:54.124 16:15:03 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:54.124 16:15:03 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:54.124 16:15:03 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:54.124 16:15:03 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:54.124 16:15:03 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:54.124 16:15:03 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.124 16:15:03 -- setup/common.sh@18 -- # local node=0 00:03:54.124 16:15:03 -- setup/common.sh@19 -- # local var val 00:03:54.124 16:15:03 -- setup/common.sh@20 -- # local mem_f mem 
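For reference, the long run of IFS=': ' / read -r var val _ / continue entries above is just a key scan over a meminfo file: every field is read and skipped until the requested one (here HugePages_Total) matches, then its value is echoed. A minimal standalone sketch of that loop, using an illustrative helper name rather than the traced get_meminfo:

get_field() {
    # Scan "key: value" lines and print the value for the requested key.
    local get=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip every non-matching field, as in the trace
        echo "$val"                        # e.g. 1024 for HugePages_Total
        return 0
    done < "$file"
    return 1
}
# Usage: get_field HugePages_Total
# Per-node files (/sys/devices/system/node/nodeN/meminfo) prefix each line with
# "Node N ", which the traced script strips first via mapfile and "${mem[@]#Node +([0-9]) }".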
00:03:54.124 16:15:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.124 16:15:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:54.124 16:15:03 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:54.124 16:15:03 -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.124 16:15:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.124 16:15:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634076 kB' 'MemFree: 21946936 kB' 'MemUsed: 10687140 kB' 'SwapCached: 0 kB' 'Active: 7286356 kB' 'Inactive: 65060 kB' 'Active(anon): 6908132 kB' 'Inactive(anon): 0 kB' 'Active(file): 378224 kB' 'Inactive(file): 65060 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7184328 kB' 'Mapped: 71652 kB' 'AnonPages: 170380 kB' 'Shmem: 6741044 kB' 'KernelStack: 8664 kB' 'PageTables: 2852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87364 kB' 'Slab: 290496 kB' 'SReclaimable: 87364 kB' 'SUnreclaim: 203132 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.124 16:15:03 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.124 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.124 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.125 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.125 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.125 16:15:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.125 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.125 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.125 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.125 16:15:03 -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.125 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.125 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.125 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.125 16:15:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.125 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.125 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.125 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.125 16:15:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.125 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.125 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.125 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.125 16:15:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.125 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.125 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.125 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.125 16:15:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.125 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.125 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.125 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.125 16:15:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.125 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.125 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.125 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.125 16:15:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.125 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.125 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.125 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.125 16:15:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.125 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.125 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.125 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.125 16:15:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.125 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.125 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.125 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.125 16:15:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.125 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.125 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.125 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.125 16:15:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.125 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.125 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.125 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.125 16:15:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.125 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.125 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.125 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.125 16:15:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.125 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.125 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.125 
16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.125 16:15:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.125 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.125 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.125 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.125 16:15:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.125 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.125 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.125 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.125 16:15:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.125 16:15:03 -- setup/common.sh@32 -- # continue 00:03:54.125 16:15:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:54.125 16:15:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:54.125 16:15:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.125 16:15:03 -- setup/common.sh@33 -- # echo 0 00:03:54.125 16:15:03 -- setup/common.sh@33 -- # return 0 00:03:54.125 16:15:03 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:54.125 16:15:03 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:54.125 16:15:03 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:54.125 16:15:03 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:54.125 16:15:03 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:54.125 node0=1024 expecting 1024 00:03:54.125 16:15:03 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:54.125 00:03:54.125 real 0m6.673s 00:03:54.125 user 0m2.400s 00:03:54.125 sys 0m4.233s 00:03:54.125 16:15:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:54.125 16:15:03 -- common/autotest_common.sh@10 -- # set +x 00:03:54.125 ************************************ 00:03:54.125 END TEST no_shrink_alloc 00:03:54.125 ************************************ 00:03:54.125 16:15:03 -- setup/hugepages.sh@217 -- # clear_hp 00:03:54.125 16:15:03 -- setup/hugepages.sh@37 -- # local node hp 00:03:54.125 16:15:03 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:54.125 16:15:03 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:54.125 16:15:03 -- setup/hugepages.sh@41 -- # echo 0 00:03:54.125 16:15:03 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:54.125 16:15:03 -- setup/hugepages.sh@41 -- # echo 0 00:03:54.125 16:15:03 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:54.125 16:15:03 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:54.125 16:15:03 -- setup/hugepages.sh@41 -- # echo 0 00:03:54.125 16:15:03 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:54.125 16:15:03 -- setup/hugepages.sh@41 -- # echo 0 00:03:54.125 16:15:03 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:54.125 16:15:03 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:54.125 00:03:54.125 real 0m25.031s 00:03:54.125 user 0m9.153s 00:03:54.125 sys 0m15.727s 00:03:54.125 16:15:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:54.125 16:15:03 -- common/autotest_common.sh@10 -- # set +x 00:03:54.125 ************************************ 00:03:54.125 END TEST hugepages 00:03:54.125 ************************************ 
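The node accounting and cleanup that close the hugepages test above reduce to two small operations: read HugePages_Total per NUMA node, and write 0 back to every nr_hugepages pool when finished. A hedged sketch with illustrative function names (not the SPDK helpers themselves):

count_node_hugepages() {
    # Print "nodeN=<count>" per NUMA node, e.g. node0=1024 and node1=0 as in this run.
    local node
    for node in /sys/devices/system/node/node[0-9]*; do
        echo "${node##*/}=$(awk '/HugePages_Total/ {print $NF}' "$node/meminfo")"
    done
}

clear_node_hugepages() {
    # Mirror of the traced clear_hp step: zero every per-node, per-size pool (needs root).
    local hp
    for hp in /sys/devices/system/node/node[0-9]*/hugepages/hugepages-*; do
        [[ -d $hp ]] || continue
        echo 0 > "$hp/nr_hugepages"
    done
}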
00:03:54.385 16:15:03 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:03:54.385 16:15:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:54.385 16:15:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:54.385 16:15:03 -- common/autotest_common.sh@10 -- # set +x 00:03:54.385 ************************************ 00:03:54.385 START TEST driver 00:03:54.385 ************************************ 00:03:54.385 16:15:03 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:03:54.385 * Looking for test storage... 00:03:54.385 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:54.385 16:15:03 -- setup/driver.sh@68 -- # setup reset 00:03:54.385 16:15:03 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:54.385 16:15:03 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:59.661 16:15:08 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:59.661 16:15:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:59.661 16:15:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:59.661 16:15:08 -- common/autotest_common.sh@10 -- # set +x 00:03:59.661 ************************************ 00:03:59.661 START TEST guess_driver 00:03:59.661 ************************************ 00:03:59.661 16:15:08 -- common/autotest_common.sh@1111 -- # guess_driver 00:03:59.661 16:15:08 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:59.661 16:15:08 -- setup/driver.sh@47 -- # local fail=0 00:03:59.661 16:15:08 -- setup/driver.sh@49 -- # pick_driver 00:03:59.661 16:15:08 -- setup/driver.sh@36 -- # vfio 00:03:59.661 16:15:08 -- setup/driver.sh@21 -- # local iommu_grups 00:03:59.661 16:15:08 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:59.661 16:15:08 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:59.661 16:15:08 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:59.661 16:15:08 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:59.661 16:15:08 -- setup/driver.sh@29 -- # (( 167 > 0 )) 00:03:59.661 16:15:08 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:59.661 16:15:08 -- setup/driver.sh@14 -- # mod vfio_pci 00:03:59.661 16:15:08 -- setup/driver.sh@12 -- # dep vfio_pci 00:03:59.661 16:15:08 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:59.661 16:15:08 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:59.661 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:59.661 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:59.661 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:59.661 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:59.661 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:59.661 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:59.661 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:59.661 16:15:08 -- setup/driver.sh@30 -- # return 0 00:03:59.661 16:15:08 -- setup/driver.sh@37 -- # echo vfio-pci 00:03:59.661 16:15:08 -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:59.661 16:15:08 -- setup/driver.sh@51 -- # [[ vfio-pci == 
\N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:59.661 16:15:08 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:59.661 Looking for driver=vfio-pci 00:03:59.661 16:15:08 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:59.661 16:15:08 -- setup/driver.sh@45 -- # setup output config 00:03:59.661 16:15:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.661 16:15:08 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:03.859 16:15:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.859 16:15:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.859 16:15:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.859 16:15:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.859 16:15:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.859 16:15:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.859 16:15:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.859 16:15:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.859 16:15:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.859 16:15:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.859 16:15:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.859 16:15:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.859 16:15:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.859 16:15:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.859 16:15:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.859 16:15:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.859 16:15:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.859 16:15:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.859 16:15:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.859 16:15:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.859 16:15:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.859 16:15:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.859 16:15:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.859 16:15:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.859 16:15:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.859 16:15:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.859 16:15:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.859 16:15:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.859 16:15:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.859 16:15:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.859 16:15:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.859 16:15:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.859 16:15:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.859 16:15:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.859 16:15:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.859 16:15:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.859 16:15:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.859 16:15:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.859 16:15:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.859 16:15:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.859 16:15:12 
-- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.859 16:15:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.859 16:15:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.859 16:15:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.859 16:15:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.859 16:15:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.859 16:15:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.859 16:15:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.859 16:15:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.859 16:15:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.859 16:15:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.859 16:15:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.859 16:15:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.859 16:15:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.859 16:15:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:03.859 16:15:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:03.859 16:15:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:03.859 16:15:12 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:03.859 16:15:12 -- setup/driver.sh@65 -- # setup reset 00:04:03.859 16:15:12 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:03.859 16:15:12 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:09.137 00:04:09.137 real 0m8.811s 00:04:09.137 user 0m2.792s 00:04:09.137 sys 0m5.251s 00:04:09.137 16:15:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:09.137 16:15:17 -- common/autotest_common.sh@10 -- # set +x 00:04:09.137 ************************************ 00:04:09.137 END TEST guess_driver 00:04:09.137 ************************************ 00:04:09.137 00:04:09.137 real 0m14.096s 00:04:09.137 user 0m4.350s 00:04:09.137 sys 0m8.145s 00:04:09.137 16:15:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:09.137 16:15:17 -- common/autotest_common.sh@10 -- # set +x 00:04:09.137 ************************************ 00:04:09.137 END TEST driver 00:04:09.137 ************************************ 00:04:09.137 16:15:17 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:04:09.137 16:15:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:09.137 16:15:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:09.137 16:15:17 -- common/autotest_common.sh@10 -- # set +x 00:04:09.137 ************************************ 00:04:09.137 START TEST devices 00:04:09.137 ************************************ 00:04:09.137 16:15:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:04:09.137 * Looking for test storage... 
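The guess_driver test above settles on vfio-pci because IOMMU groups are present (167 of them) and modprobe can resolve vfio_pci to a .ko. A hedged sketch of that decision; the function name and the uio_pci_generic fallback are assumptions, since the traced run never reached a fallback path:

pick_dpdk_driver() {
    local groups=(/sys/kernel/iommu_groups/*)
    [[ -e ${groups[0]} ]] || groups=()     # an unmatched glob means no IOMMU groups
    if (( ${#groups[@]} > 0 )) && modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
        echo vfio-pci
    else
        echo uio_pci_generic               # assumed fallback, not exercised in this log
    fi
}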
00:04:09.137 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:09.137 16:15:17 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:09.137 16:15:17 -- setup/devices.sh@192 -- # setup reset 00:04:09.137 16:15:17 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:09.137 16:15:17 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:13.337 16:15:21 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:13.337 16:15:21 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:13.337 16:15:21 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:13.337 16:15:21 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:13.337 16:15:21 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:13.337 16:15:21 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:13.337 16:15:21 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:13.337 16:15:21 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:13.337 16:15:21 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:13.337 16:15:21 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:13.337 16:15:21 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:04:13.337 16:15:21 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:04:13.337 16:15:21 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:13.337 16:15:21 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:13.337 16:15:21 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:13.337 16:15:21 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:04:13.337 16:15:21 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:04:13.337 16:15:21 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:13.337 16:15:21 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:13.337 16:15:21 -- setup/devices.sh@196 -- # blocks=() 00:04:13.337 16:15:21 -- setup/devices.sh@196 -- # declare -a blocks 00:04:13.337 16:15:21 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:13.337 16:15:21 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:13.337 16:15:21 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:13.337 16:15:21 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:13.337 16:15:21 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:13.337 16:15:21 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:13.337 16:15:21 -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:04:13.337 16:15:21 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:04:13.337 16:15:21 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:13.337 16:15:21 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:13.337 16:15:21 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:13.337 No valid GPT data, bailing 00:04:13.337 16:15:21 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:13.337 16:15:21 -- scripts/common.sh@391 -- # pt= 00:04:13.337 16:15:21 -- scripts/common.sh@392 -- # return 1 00:04:13.337 16:15:21 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:13.337 16:15:21 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:13.337 16:15:21 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:13.337 16:15:21 -- setup/common.sh@80 -- # echo 1920383410176 
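The device scan above keeps a namespace only if it is not zoned, carries no partition-table signature, and is at least min_disk_size (3221225472 bytes, i.e. 3 GiB). The traced run asks spdk-gpt.py first and then blkid; the sketch below keeps only the blkid check and uses an illustrative function name:

usable_test_disk() {
    local blk=$1 min=3221225472 bytes
    if [[ -e /sys/block/$blk/queue/zoned ]]; then
        [[ $(< "/sys/block/$blk/queue/zoned") == none ]] || return 1          # skip zoned namespaces
    fi
    [[ -z $(blkid -s PTTYPE -o value "/dev/$blk" 2>/dev/null) ]] || return 1  # partition table present => in use
    bytes=$(( $(< "/sys/block/$blk/size") * 512 ))   # the sysfs size file counts 512-byte sectors
    (( bytes >= min ))                               # e.g. 1920383410176 >= 3221225472 for nvme0n1 here
}
# Usage: usable_test_disk nvme0n1 && echo "candidate test disk"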
00:04:13.337 16:15:21 -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:04:13.337 16:15:21 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:13.337 16:15:21 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:04:13.337 16:15:21 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:13.337 16:15:21 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:13.337 16:15:21 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:13.337 16:15:21 -- setup/devices.sh@202 -- # pci=0000:af:00.0 00:04:13.337 16:15:21 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\a\f\:\0\0\.\0* ]] 00:04:13.337 16:15:21 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:13.337 16:15:21 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:04:13.337 16:15:21 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:04:13.337 No valid GPT data, bailing 00:04:13.337 16:15:21 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:13.337 16:15:21 -- scripts/common.sh@391 -- # pt= 00:04:13.337 16:15:21 -- scripts/common.sh@392 -- # return 1 00:04:13.337 16:15:21 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:13.337 16:15:21 -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:13.337 16:15:21 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:13.337 16:15:21 -- setup/common.sh@80 -- # echo 375083606016 00:04:13.337 16:15:21 -- setup/devices.sh@204 -- # (( 375083606016 >= min_disk_size )) 00:04:13.337 16:15:21 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:13.337 16:15:21 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:af:00.0 00:04:13.337 16:15:21 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:13.337 16:15:21 -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:04:13.337 16:15:21 -- setup/devices.sh@201 -- # ctrl=nvme2 00:04:13.337 16:15:21 -- setup/devices.sh@202 -- # pci=0000:b0:00.0 00:04:13.337 16:15:21 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\b\0\:\0\0\.\0* ]] 00:04:13.337 16:15:21 -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:04:13.337 16:15:21 -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:04:13.337 16:15:21 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme2n1 00:04:13.337 No valid GPT data, bailing 00:04:13.337 16:15:21 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:13.337 16:15:21 -- scripts/common.sh@391 -- # pt= 00:04:13.337 16:15:21 -- scripts/common.sh@392 -- # return 1 00:04:13.337 16:15:21 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:04:13.337 16:15:21 -- setup/common.sh@76 -- # local dev=nvme2n1 00:04:13.337 16:15:21 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:04:13.337 16:15:21 -- setup/common.sh@80 -- # echo 375083606016 00:04:13.337 16:15:21 -- setup/devices.sh@204 -- # (( 375083606016 >= min_disk_size )) 00:04:13.337 16:15:21 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:13.337 16:15:21 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:b0:00.0 00:04:13.337 16:15:21 -- setup/devices.sh@209 -- # (( 3 > 0 )) 00:04:13.337 16:15:21 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:13.337 16:15:21 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:13.337 16:15:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:13.337 16:15:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:13.337 16:15:21 -- 
common/autotest_common.sh@10 -- # set +x 00:04:13.337 ************************************ 00:04:13.337 START TEST nvme_mount 00:04:13.337 ************************************ 00:04:13.337 16:15:21 -- common/autotest_common.sh@1111 -- # nvme_mount 00:04:13.337 16:15:21 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:13.337 16:15:21 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:13.337 16:15:21 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:13.337 16:15:21 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:13.337 16:15:21 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:13.337 16:15:21 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:13.337 16:15:21 -- setup/common.sh@40 -- # local part_no=1 00:04:13.337 16:15:21 -- setup/common.sh@41 -- # local size=1073741824 00:04:13.337 16:15:21 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:13.337 16:15:21 -- setup/common.sh@44 -- # parts=() 00:04:13.337 16:15:21 -- setup/common.sh@44 -- # local parts 00:04:13.337 16:15:21 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:13.337 16:15:21 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:13.337 16:15:21 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:13.337 16:15:21 -- setup/common.sh@46 -- # (( part++ )) 00:04:13.337 16:15:21 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:13.337 16:15:21 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:13.337 16:15:21 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:13.337 16:15:21 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:13.907 Creating new GPT entries in memory. 00:04:13.907 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:13.907 other utilities. 00:04:13.907 16:15:22 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:13.907 16:15:22 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:13.907 16:15:22 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:13.907 16:15:22 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:13.907 16:15:22 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:15.287 Creating new GPT entries in memory. 00:04:15.287 The operation has completed successfully. 
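The nvme_mount steps traced above (zap the GPT, carve a 1 GiB partition starting at sector 2048, format it ext4, mount it, and drop a marker file) condense to the destructive sequence below; DISK and MNT are placeholders, and running it wipes the target device exactly as the traced sgdisk --zap-all does:

DISK=/dev/nvme0n1                   # placeholder: the test disk selected earlier
MNT=/tmp/nvme_mount_test            # placeholder mount point, not the workspace path in the log
SECTORS=$(( 1073741824 / 512 ))     # 1 GiB in 512-byte sectors = 2097152
sgdisk "$DISK" --zap-all                               # destroy existing GPT/MBR structures
sgdisk "$DISK" --new=1:2048:$(( 2048 + SECTORS - 1 ))  # partition 1: sectors 2048..2099199
mkfs.ext4 -qF "${DISK}p1"                              # quiet, forced ext4 format
mkdir -p "$MNT" && mount "${DISK}p1" "$MNT"
touch "$MNT/test_nvme"                                 # marker file the verify step checks for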
00:04:15.287 16:15:23 -- setup/common.sh@57 -- # (( part++ )) 00:04:15.287 16:15:23 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:15.287 16:15:23 -- setup/common.sh@62 -- # wait 312543 00:04:15.287 16:15:23 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:15.287 16:15:23 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:15.287 16:15:23 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:15.287 16:15:23 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:15.287 16:15:23 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:15.287 16:15:23 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:15.287 16:15:23 -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:15.287 16:15:23 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:15.287 16:15:23 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:15.287 16:15:23 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:15.287 16:15:23 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:15.287 16:15:23 -- setup/devices.sh@53 -- # local found=0 00:04:15.287 16:15:23 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:15.287 16:15:23 -- setup/devices.sh@56 -- # : 00:04:15.287 16:15:23 -- setup/devices.sh@59 -- # local pci status 00:04:15.287 16:15:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.287 16:15:23 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:15.287 16:15:23 -- setup/devices.sh@47 -- # setup output config 00:04:15.287 16:15:23 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:15.287 16:15:23 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:18.584 16:15:26 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:18.584 16:15:26 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:18.584 16:15:26 -- setup/devices.sh@63 -- # found=1 00:04:18.584 16:15:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.584 16:15:26 -- setup/devices.sh@62 -- # [[ 0000:af:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:18.584 16:15:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.584 16:15:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:18.584 16:15:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.584 16:15:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:18.584 16:15:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.584 16:15:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:18.584 16:15:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.584 16:15:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:18.584 16:15:27 -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:04:18.584 16:15:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:18.584 16:15:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.584 16:15:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:18.584 16:15:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.584 16:15:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:18.584 16:15:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.584 16:15:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:18.584 16:15:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.584 16:15:27 -- setup/devices.sh@62 -- # [[ 0000:b0:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:18.584 16:15:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.584 16:15:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:18.584 16:15:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.584 16:15:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:18.584 16:15:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.584 16:15:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:18.584 16:15:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.584 16:15:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:18.584 16:15:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.584 16:15:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:18.584 16:15:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.584 16:15:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:18.584 16:15:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.584 16:15:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:18.584 16:15:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.584 16:15:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:18.584 16:15:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.584 16:15:27 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:18.584 16:15:27 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:18.584 16:15:27 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.584 16:15:27 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:18.584 16:15:27 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:18.584 16:15:27 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:18.584 16:15:27 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.584 16:15:27 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.584 16:15:27 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:18.584 16:15:27 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:18.584 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:18.584 16:15:27 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:18.584 16:15:27 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:18.844 /dev/nvme0n1: 8 
bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:18.844 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:18.844 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:18.844 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:18.844 16:15:27 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:18.844 16:15:27 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:18.844 16:15:27 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.844 16:15:27 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:18.844 16:15:27 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:18.844 16:15:27 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.844 16:15:27 -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:18.844 16:15:27 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:18.844 16:15:27 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:18.844 16:15:27 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.844 16:15:27 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:18.844 16:15:27 -- setup/devices.sh@53 -- # local found=0 00:04:18.844 16:15:27 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:18.844 16:15:27 -- setup/devices.sh@56 -- # : 00:04:18.844 16:15:27 -- setup/devices.sh@59 -- # local pci status 00:04:18.844 16:15:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.844 16:15:27 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:18.844 16:15:27 -- setup/devices.sh@47 -- # setup output config 00:04:18.844 16:15:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.845 16:15:27 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:22.138 16:15:30 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.138 16:15:30 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:22.138 16:15:30 -- setup/devices.sh@63 -- # found=1 00:04:22.138 16:15:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.138 16:15:30 -- setup/devices.sh@62 -- # [[ 0000:af:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.138 16:15:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.138 16:15:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.138 16:15:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.138 16:15:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.138 16:15:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.138 16:15:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.138 16:15:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.138 
16:15:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.138 16:15:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.138 16:15:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.138 16:15:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.138 16:15:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.138 16:15:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.138 16:15:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.138 16:15:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.138 16:15:30 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.138 16:15:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.138 16:15:30 -- setup/devices.sh@62 -- # [[ 0000:b0:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.138 16:15:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.138 16:15:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.138 16:15:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.138 16:15:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.138 16:15:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.138 16:15:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.138 16:15:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.138 16:15:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.138 16:15:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.138 16:15:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.138 16:15:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.138 16:15:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.138 16:15:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.138 16:15:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.138 16:15:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.138 16:15:30 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:22.138 16:15:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.138 16:15:30 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:22.138 16:15:30 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:22.138 16:15:30 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:22.139 16:15:30 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:22.139 16:15:30 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:22.139 16:15:30 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:22.139 16:15:31 -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:04:22.139 16:15:31 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:22.139 16:15:31 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:22.139 16:15:31 -- setup/devices.sh@50 -- # local mount_point= 00:04:22.139 16:15:31 -- setup/devices.sh@51 -- # local test_file= 00:04:22.139 16:15:31 -- setup/devices.sh@53 -- # local found=0 00:04:22.139 16:15:31 -- 
setup/devices.sh@55 -- # [[ -n '' ]] 00:04:22.139 16:15:31 -- setup/devices.sh@59 -- # local pci status 00:04:22.139 16:15:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.139 16:15:31 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:22.139 16:15:31 -- setup/devices.sh@47 -- # setup output config 00:04:22.139 16:15:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.139 16:15:31 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:25.429 16:15:33 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:25.429 16:15:33 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:25.429 16:15:33 -- setup/devices.sh@63 -- # found=1 00:04:25.429 16:15:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.429 16:15:33 -- setup/devices.sh@62 -- # [[ 0000:af:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:25.429 16:15:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.429 16:15:34 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:25.429 16:15:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.429 16:15:34 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:25.429 16:15:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.429 16:15:34 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:25.429 16:15:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.429 16:15:34 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:25.429 16:15:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.429 16:15:34 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:25.429 16:15:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.429 16:15:34 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:25.429 16:15:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.429 16:15:34 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:25.429 16:15:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.429 16:15:34 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:25.429 16:15:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.429 16:15:34 -- setup/devices.sh@62 -- # [[ 0000:b0:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:25.429 16:15:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.429 16:15:34 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:25.429 16:15:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.429 16:15:34 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:25.429 16:15:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.429 16:15:34 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:25.429 16:15:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.429 16:15:34 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:25.429 16:15:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.429 16:15:34 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:25.429 16:15:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.429 16:15:34 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == 
\0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:25.429 16:15:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.429 16:15:34 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:25.429 16:15:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.429 16:15:34 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:25.429 16:15:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:25.429 16:15:34 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:25.429 16:15:34 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:25.429 16:15:34 -- setup/devices.sh@68 -- # return 0 00:04:25.429 16:15:34 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:25.429 16:15:34 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.429 16:15:34 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:25.429 16:15:34 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:25.429 16:15:34 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:25.429 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:25.429 00:04:25.429 real 0m12.588s 00:04:25.429 user 0m3.704s 00:04:25.429 sys 0m6.734s 00:04:25.429 16:15:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:25.429 16:15:34 -- common/autotest_common.sh@10 -- # set +x 00:04:25.429 ************************************ 00:04:25.429 END TEST nvme_mount 00:04:25.429 ************************************ 00:04:25.688 16:15:34 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:25.688 16:15:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:25.688 16:15:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:25.688 16:15:34 -- common/autotest_common.sh@10 -- # set +x 00:04:25.688 ************************************ 00:04:25.688 START TEST dm_mount 00:04:25.688 ************************************ 00:04:25.688 16:15:34 -- common/autotest_common.sh@1111 -- # dm_mount 00:04:25.688 16:15:34 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:25.688 16:15:34 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:25.688 16:15:34 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:25.688 16:15:34 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:25.688 16:15:34 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:25.688 16:15:34 -- setup/common.sh@40 -- # local part_no=2 00:04:25.688 16:15:34 -- setup/common.sh@41 -- # local size=1073741824 00:04:25.688 16:15:34 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:25.688 16:15:34 -- setup/common.sh@44 -- # parts=() 00:04:25.688 16:15:34 -- setup/common.sh@44 -- # local parts 00:04:25.688 16:15:34 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:25.688 16:15:34 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:25.688 16:15:34 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:25.688 16:15:34 -- setup/common.sh@46 -- # (( part++ )) 00:04:25.688 16:15:34 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:25.688 16:15:34 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:25.688 16:15:34 -- setup/common.sh@46 -- # (( part++ )) 00:04:25.688 16:15:34 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:25.688 16:15:34 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:25.688 16:15:34 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:25.688 16:15:34 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:27.067 Creating new 
GPT entries in memory. 00:04:27.067 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:27.067 other utilities. 00:04:27.067 16:15:35 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:27.067 16:15:35 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:27.067 16:15:35 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:27.067 16:15:35 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:27.067 16:15:35 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:28.006 Creating new GPT entries in memory. 00:04:28.006 The operation has completed successfully. 00:04:28.006 16:15:36 -- setup/common.sh@57 -- # (( part++ )) 00:04:28.006 16:15:36 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:28.006 16:15:36 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:28.006 16:15:36 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:28.006 16:15:36 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:28.944 The operation has completed successfully. 00:04:28.944 16:15:37 -- setup/common.sh@57 -- # (( part++ )) 00:04:28.944 16:15:37 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:28.944 16:15:37 -- setup/common.sh@62 -- # wait 316607 00:04:28.944 16:15:37 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:28.944 16:15:37 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:28.944 16:15:37 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:28.944 16:15:37 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:28.944 16:15:37 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:28.944 16:15:37 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:28.944 16:15:37 -- setup/devices.sh@161 -- # break 00:04:28.944 16:15:37 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:28.944 16:15:37 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:28.944 16:15:37 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:28.944 16:15:37 -- setup/devices.sh@166 -- # dm=dm-0 00:04:28.944 16:15:37 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:28.944 16:15:37 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:28.944 16:15:37 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:28.944 16:15:37 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount size= 00:04:28.944 16:15:37 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:28.944 16:15:37 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:28.944 16:15:37 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:28.944 16:15:37 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:28.944 16:15:37 -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:28.944 16:15:37 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 
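[Editor note] The dm_mount flow traced above reduces to a handful of block-device commands. A minimal standalone sketch follows, reusing the disk name and sector ranges printed in the log; the test does not print the device-mapper table it hands to dmsetup, so the single linear target below is illustrative only (the real test stacks both partitions under dm-0), and the mount point name is hypothetical.

  # Reproduce the partition + device-mapper steps by hand (illustrative sketch, needs root).
  disk=/dev/nvme0n1
  sgdisk "$disk" --zap-all                  # wipe existing GPT/MBR metadata
  sgdisk "$disk" --new=1:2048:2099199       # ~1 GiB partition 1 (sector range from the log)
  sgdisk "$disk" --new=2:2099200:4196351    # ~1 GiB partition 2
  # Assumed single linear target over partition 1; the traced test maps both partitions.
  dmsetup create nvme_dm_test \
      --table "0 $(blockdev --getsz ${disk}p1) linear ${disk}p1 0"
  mkfs.ext4 -qF /dev/mapper/nvme_dm_test
  mount /dev/mapper/nvme_dm_test /mnt/dm_test   # hypothetical mount point
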
00:04:28.944 16:15:37 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:28.944 16:15:37 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:28.944 16:15:37 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:28.944 16:15:37 -- setup/devices.sh@53 -- # local found=0 00:04:28.945 16:15:37 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:28.945 16:15:37 -- setup/devices.sh@56 -- # : 00:04:28.945 16:15:37 -- setup/devices.sh@59 -- # local pci status 00:04:28.945 16:15:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.945 16:15:37 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:28.945 16:15:37 -- setup/devices.sh@47 -- # setup output config 00:04:28.945 16:15:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:28.945 16:15:37 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:32.237 16:15:40 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.237 16:15:40 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:32.237 16:15:40 -- setup/devices.sh@63 -- # found=1 00:04:32.237 16:15:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.237 16:15:40 -- setup/devices.sh@62 -- # [[ 0000:af:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.237 16:15:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.237 16:15:41 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.237 16:15:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.237 16:15:41 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.237 16:15:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.237 16:15:41 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.237 16:15:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.237 16:15:41 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.237 16:15:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.237 16:15:41 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.237 16:15:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.237 16:15:41 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.237 16:15:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.237 16:15:41 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.237 16:15:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.237 16:15:41 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.237 16:15:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.237 16:15:41 -- setup/devices.sh@62 -- # [[ 0000:b0:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.237 16:15:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.237 16:15:41 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.237 16:15:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.237 16:15:41 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.237 16:15:41 -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.237 16:15:41 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.237 16:15:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.237 16:15:41 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.237 16:15:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.237 16:15:41 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.237 16:15:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.237 16:15:41 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.237 16:15:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.237 16:15:41 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.237 16:15:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.237 16:15:41 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:32.237 16:15:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.497 16:15:41 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:32.497 16:15:41 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:32.497 16:15:41 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:32.497 16:15:41 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:32.497 16:15:41 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:32.497 16:15:41 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:32.497 16:15:41 -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:32.497 16:15:41 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:32.497 16:15:41 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:32.497 16:15:41 -- setup/devices.sh@50 -- # local mount_point= 00:04:32.497 16:15:41 -- setup/devices.sh@51 -- # local test_file= 00:04:32.497 16:15:41 -- setup/devices.sh@53 -- # local found=0 00:04:32.497 16:15:41 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:32.497 16:15:41 -- setup/devices.sh@59 -- # local pci status 00:04:32.497 16:15:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.497 16:15:41 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:32.497 16:15:41 -- setup/devices.sh@47 -- # setup output config 00:04:32.497 16:15:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:32.497 16:15:41 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:35.794 16:15:44 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.794 16:15:44 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:35.794 16:15:44 -- setup/devices.sh@63 -- # found=1 00:04:35.794 16:15:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.794 16:15:44 -- setup/devices.sh@62 -- # [[ 0000:af:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.794 16:15:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.794 16:15:44 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == 
\0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.794 16:15:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.794 16:15:44 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.794 16:15:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.794 16:15:44 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.794 16:15:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.794 16:15:44 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.794 16:15:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.794 16:15:44 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.794 16:15:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.794 16:15:44 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.794 16:15:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.794 16:15:44 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.794 16:15:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.794 16:15:44 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.794 16:15:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.794 16:15:44 -- setup/devices.sh@62 -- # [[ 0000:b0:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.794 16:15:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.794 16:15:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.794 16:15:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.794 16:15:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.794 16:15:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.794 16:15:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.794 16:15:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.794 16:15:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.794 16:15:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.794 16:15:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.794 16:15:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.794 16:15:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.794 16:15:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.794 16:15:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.794 16:15:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.794 16:15:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:35.794 16:15:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.794 16:15:44 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:35.794 16:15:44 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:35.794 16:15:44 -- setup/devices.sh@68 -- # return 0 00:04:35.794 16:15:44 -- setup/devices.sh@187 -- # cleanup_dm 00:04:35.794 16:15:44 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:35.794 16:15:44 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:35.794 16:15:44 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:35.794 16:15:44 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:35.794 16:15:44 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:35.794 /dev/nvme0n1p1: 2 
bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:35.794 16:15:44 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:35.794 16:15:44 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:35.794 00:04:35.794 real 0m10.142s 00:04:35.794 user 0m2.599s 00:04:35.794 sys 0m4.584s 00:04:35.794 16:15:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:35.794 16:15:44 -- common/autotest_common.sh@10 -- # set +x 00:04:35.794 ************************************ 00:04:35.794 END TEST dm_mount 00:04:35.794 ************************************ 00:04:36.055 16:15:44 -- setup/devices.sh@1 -- # cleanup 00:04:36.055 16:15:44 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:36.056 16:15:44 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:36.056 16:15:44 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:36.056 16:15:44 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:36.056 16:15:44 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:36.056 16:15:44 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:36.315 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:36.315 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:36.315 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:36.315 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:36.315 16:15:45 -- setup/devices.sh@12 -- # cleanup_dm 00:04:36.315 16:15:45 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:36.315 16:15:45 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:36.315 16:15:45 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:36.315 16:15:45 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:36.316 16:15:45 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:36.316 16:15:45 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:36.316 00:04:36.316 real 0m27.510s 00:04:36.316 user 0m7.964s 00:04:36.316 sys 0m14.283s 00:04:36.316 16:15:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:36.316 16:15:45 -- common/autotest_common.sh@10 -- # set +x 00:04:36.316 ************************************ 00:04:36.316 END TEST devices 00:04:36.316 ************************************ 00:04:36.316 00:04:36.316 real 1m32.264s 00:04:36.316 user 0m29.377s 00:04:36.316 sys 0m53.333s 00:04:36.316 16:15:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:36.316 16:15:45 -- common/autotest_common.sh@10 -- # set +x 00:04:36.316 ************************************ 00:04:36.316 END TEST setup.sh 00:04:36.316 ************************************ 00:04:36.316 16:15:45 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:04:39.607 Hugepages 00:04:39.607 node hugesize free / total 00:04:39.607 node0 1048576kB 0 / 0 00:04:39.607 node0 2048kB 2048 / 2048 00:04:39.607 node1 1048576kB 0 / 0 00:04:39.607 node1 2048kB 0 / 0 00:04:39.607 00:04:39.607 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:39.607 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:39.607 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:39.607 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:39.607 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:39.607 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:39.607 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:39.607 I/OAT 
0000:00:04.6 8086 2021 0 ioatdma - - 00:04:39.607 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:39.607 NVMe 0000:5e:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:39.607 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:39.607 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:39.607 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:39.607 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:39.607 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:39.607 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:39.607 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:39.607 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:39.607 NVMe 0000:af:00.0 8086 2701 1 nvme nvme1 nvme1n1 00:04:39.607 NVMe 0000:b0:00.0 8086 2701 1 nvme nvme2 nvme2n1 00:04:39.607 16:15:48 -- spdk/autotest.sh@130 -- # uname -s 00:04:39.607 16:15:48 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:39.607 16:15:48 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:39.607 16:15:48 -- common/autotest_common.sh@1517 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:42.905 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:42.905 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:42.905 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:42.905 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:42.905 0000:af:00.0 (8086 2701): nvme -> vfio-pci 00:04:42.905 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:42.905 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:42.905 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:42.905 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:42.905 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:42.905 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:42.905 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:42.905 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:42.905 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:42.905 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:42.905 0000:b0:00.0 (8086 2701): nvme -> vfio-pci 00:04:42.905 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:42.905 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:44.813 0000:5e:00.0 (144d a80a): nvme -> vfio-pci 00:04:44.813 16:15:53 -- common/autotest_common.sh@1518 -- # sleep 1 00:04:45.752 16:15:54 -- common/autotest_common.sh@1519 -- # bdfs=() 00:04:45.752 16:15:54 -- common/autotest_common.sh@1519 -- # local bdfs 00:04:45.752 16:15:54 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:45.752 16:15:54 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:45.752 16:15:54 -- common/autotest_common.sh@1499 -- # bdfs=() 00:04:45.752 16:15:54 -- common/autotest_common.sh@1499 -- # local bdfs 00:04:45.752 16:15:54 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:45.752 16:15:54 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:45.752 16:15:54 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:04:45.752 16:15:54 -- common/autotest_common.sh@1501 -- # (( 3 == 0 )) 00:04:45.752 16:15:54 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:5e:00.0 0000:af:00.0 0000:b0:00.0 00:04:45.752 16:15:54 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:49.042 Waiting for block devices as requested 00:04:49.042 0000:5e:00.0 (144d a80a): vfio-pci -> nvme 00:04:49.042 0000:af:00.0 (8086 2701): vfio-pci -> nvme 00:04:49.042 0000:00:04.7 (8086 
2021): vfio-pci -> ioatdma 00:04:49.301 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:49.301 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:49.301 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:49.560 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:49.560 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:49.560 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:49.560 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:49.820 0000:b0:00.0 (8086 2701): vfio-pci -> nvme 00:04:49.820 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:49.820 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:50.080 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:50.080 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:50.080 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:50.339 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:50.339 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:50.339 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:50.600 16:15:59 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:50.600 16:15:59 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:50.600 16:15:59 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 00:04:50.600 16:15:59 -- common/autotest_common.sh@1488 -- # grep 0000:5e:00.0/nvme/nvme 00:04:50.600 16:15:59 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:50.600 16:15:59 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:50.600 16:15:59 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:50.600 16:15:59 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:04:50.600 16:15:59 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:50.600 16:15:59 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:50.600 16:15:59 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:50.600 16:15:59 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:50.600 16:15:59 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:50.600 16:15:59 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:04:50.600 16:15:59 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:50.600 16:15:59 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:50.600 16:15:59 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:50.600 16:15:59 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:50.600 16:15:59 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:50.600 16:15:59 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:50.600 16:15:59 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:50.600 16:15:59 -- common/autotest_common.sh@1543 -- # continue 00:04:50.600 16:15:59 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:50.600 16:15:59 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:af:00.0 00:04:50.600 16:15:59 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 00:04:50.600 16:15:59 -- common/autotest_common.sh@1488 -- # grep 0000:af:00.0/nvme/nvme 00:04:50.600 16:15:59 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:ae/0000:ae:00.0/0000:af:00.0/nvme/nvme1 00:04:50.600 16:15:59 -- common/autotest_common.sh@1489 -- # [[ -z 
/sys/devices/pci0000:ae/0000:ae:00.0/0000:af:00.0/nvme/nvme1 ]] 00:04:50.600 16:15:59 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:ae/0000:ae:00.0/0000:af:00.0/nvme/nvme1 00:04:50.600 16:15:59 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme1 00:04:50.600 16:15:59 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:50.600 16:15:59 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:50.600 16:15:59 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:50.600 16:15:59 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:50.600 16:15:59 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:50.600 16:15:59 -- common/autotest_common.sh@1531 -- # oacs=' 0x7' 00:04:50.600 16:15:59 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=0 00:04:50.600 16:15:59 -- common/autotest_common.sh@1534 -- # [[ 0 -ne 0 ]] 00:04:50.600 16:15:59 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:50.600 16:15:59 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:b0:00.0 00:04:50.600 16:15:59 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 00:04:50.600 16:15:59 -- common/autotest_common.sh@1488 -- # grep 0000:b0:00.0/nvme/nvme 00:04:50.600 16:15:59 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:ae/0000:ae:02.0/0000:b0:00.0/nvme/nvme2 00:04:50.600 16:15:59 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:ae/0000:ae:02.0/0000:b0:00.0/nvme/nvme2 ]] 00:04:50.600 16:15:59 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:ae/0000:ae:02.0/0000:b0:00.0/nvme/nvme2 00:04:50.600 16:15:59 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme2 00:04:50.600 16:15:59 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:04:50.600 16:15:59 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:04:50.600 16:15:59 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:04:50.600 16:15:59 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:50.600 16:15:59 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:50.600 16:15:59 -- common/autotest_common.sh@1531 -- # oacs=' 0x7' 00:04:50.600 16:15:59 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=0 00:04:50.600 16:15:59 -- common/autotest_common.sh@1534 -- # [[ 0 -ne 0 ]] 00:04:50.600 16:15:59 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:50.600 16:15:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:50.600 16:15:59 -- common/autotest_common.sh@10 -- # set +x 00:04:50.600 16:15:59 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:50.600 16:15:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:50.600 16:15:59 -- common/autotest_common.sh@10 -- # set +x 00:04:50.600 16:15:59 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:53.895 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:53.895 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:53.895 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:53.895 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:53.895 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:53.895 0000:af:00.0 (8086 2701): nvme -> vfio-pci 00:04:53.895 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:53.895 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:53.895 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:53.895 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:53.895 
0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:53.895 0000:5e:00.0 (144d a80a): nvme -> vfio-pci 00:04:53.895 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:53.895 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:53.895 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:53.895 0000:b0:00.0 (8086 2701): nvme -> vfio-pci 00:04:53.895 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:53.895 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:53.895 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:53.895 16:16:02 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:53.895 16:16:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:53.895 16:16:02 -- common/autotest_common.sh@10 -- # set +x 00:04:53.895 16:16:02 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:53.895 16:16:02 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:04:53.895 16:16:02 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:04:53.895 16:16:02 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:53.895 16:16:02 -- common/autotest_common.sh@1563 -- # local bdfs 00:04:53.895 16:16:02 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:04:53.895 16:16:02 -- common/autotest_common.sh@1499 -- # bdfs=() 00:04:53.895 16:16:02 -- common/autotest_common.sh@1499 -- # local bdfs 00:04:53.895 16:16:02 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:53.895 16:16:02 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:53.895 16:16:02 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:04:54.155 16:16:02 -- common/autotest_common.sh@1501 -- # (( 3 == 0 )) 00:04:54.155 16:16:02 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:5e:00.0 0000:af:00.0 0000:b0:00.0 00:04:54.155 16:16:02 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:04:54.155 16:16:02 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:54.155 16:16:02 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:04:54.155 16:16:02 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:04:54.155 16:16:02 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:04:54.155 16:16:02 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:af:00.0/device 00:04:54.155 16:16:02 -- common/autotest_common.sh@1566 -- # device=0x2701 00:04:54.155 16:16:02 -- common/autotest_common.sh@1567 -- # [[ 0x2701 == \0\x\0\a\5\4 ]] 00:04:54.155 16:16:02 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:04:54.155 16:16:02 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:b0:00.0/device 00:04:54.155 16:16:02 -- common/autotest_common.sh@1566 -- # device=0x2701 00:04:54.155 16:16:02 -- common/autotest_common.sh@1567 -- # [[ 0x2701 == \0\x\0\a\5\4 ]] 00:04:54.155 16:16:02 -- common/autotest_common.sh@1572 -- # printf '%s\n' 00:04:54.155 16:16:02 -- common/autotest_common.sh@1578 -- # [[ -z '' ]] 00:04:54.155 16:16:02 -- common/autotest_common.sh@1579 -- # return 0 00:04:54.155 16:16:02 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:54.155 16:16:02 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:54.155 16:16:02 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:54.155 16:16:02 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:54.155 16:16:02 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:54.155 16:16:02 -- common/autotest_common.sh@710 -- # 
xtrace_disable 00:04:54.155 16:16:02 -- common/autotest_common.sh@10 -- # set +x 00:04:54.155 16:16:02 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:04:54.155 16:16:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:54.155 16:16:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.155 16:16:02 -- common/autotest_common.sh@10 -- # set +x 00:04:54.155 ************************************ 00:04:54.155 START TEST env 00:04:54.155 ************************************ 00:04:54.155 16:16:03 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:04:54.414 * Looking for test storage... 00:04:54.414 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:04:54.414 16:16:03 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:04:54.414 16:16:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:54.414 16:16:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.414 16:16:03 -- common/autotest_common.sh@10 -- # set +x 00:04:54.674 ************************************ 00:04:54.674 START TEST env_memory 00:04:54.674 ************************************ 00:04:54.674 16:16:03 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:04:54.674 00:04:54.674 00:04:54.674 CUnit - A unit testing framework for C - Version 2.1-3 00:04:54.674 http://cunit.sourceforge.net/ 00:04:54.674 00:04:54.674 00:04:54.674 Suite: memory 00:04:54.674 Test: alloc and free memory map ...[2024-04-26 16:16:03.492781] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:54.674 passed 00:04:54.674 Test: mem map translation ...[2024-04-26 16:16:03.512208] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:54.674 [2024-04-26 16:16:03.512226] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:54.674 [2024-04-26 16:16:03.512267] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:54.674 [2024-04-26 16:16:03.512276] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:54.674 passed 00:04:54.674 Test: mem map registration ...[2024-04-26 16:16:03.549023] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:54.674 [2024-04-26 16:16:03.549041] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:54.674 passed 00:04:54.674 Test: mem map adjacent registrations ...passed 00:04:54.674 00:04:54.674 Run Summary: Type Total Ran Passed Failed Inactive 00:04:54.674 suites 1 1 n/a 0 0 00:04:54.674 tests 4 4 4 0 0 00:04:54.674 asserts 152 152 152 0 n/a 00:04:54.674 00:04:54.674 Elapsed time = 0.137 seconds 00:04:54.674 00:04:54.674 real 0m0.151s 00:04:54.674 user 
0m0.140s 00:04:54.674 sys 0m0.011s 00:04:54.674 16:16:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:54.674 16:16:03 -- common/autotest_common.sh@10 -- # set +x 00:04:54.674 ************************************ 00:04:54.674 END TEST env_memory 00:04:54.674 ************************************ 00:04:54.674 16:16:03 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:54.674 16:16:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:54.674 16:16:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.674 16:16:03 -- common/autotest_common.sh@10 -- # set +x 00:04:54.934 ************************************ 00:04:54.934 START TEST env_vtophys 00:04:54.934 ************************************ 00:04:54.934 16:16:03 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:54.934 EAL: lib.eal log level changed from notice to debug 00:04:54.934 EAL: Detected lcore 0 as core 0 on socket 0 00:04:54.934 EAL: Detected lcore 1 as core 1 on socket 0 00:04:54.934 EAL: Detected lcore 2 as core 2 on socket 0 00:04:54.934 EAL: Detected lcore 3 as core 3 on socket 0 00:04:54.934 EAL: Detected lcore 4 as core 4 on socket 0 00:04:54.934 EAL: Detected lcore 5 as core 8 on socket 0 00:04:54.934 EAL: Detected lcore 6 as core 9 on socket 0 00:04:54.934 EAL: Detected lcore 7 as core 10 on socket 0 00:04:54.934 EAL: Detected lcore 8 as core 11 on socket 0 00:04:54.934 EAL: Detected lcore 9 as core 16 on socket 0 00:04:54.934 EAL: Detected lcore 10 as core 17 on socket 0 00:04:54.934 EAL: Detected lcore 11 as core 18 on socket 0 00:04:54.934 EAL: Detected lcore 12 as core 19 on socket 0 00:04:54.934 EAL: Detected lcore 13 as core 20 on socket 0 00:04:54.934 EAL: Detected lcore 14 as core 24 on socket 0 00:04:54.934 EAL: Detected lcore 15 as core 25 on socket 0 00:04:54.934 EAL: Detected lcore 16 as core 26 on socket 0 00:04:54.934 EAL: Detected lcore 17 as core 27 on socket 0 00:04:54.934 EAL: Detected lcore 18 as core 0 on socket 1 00:04:54.934 EAL: Detected lcore 19 as core 1 on socket 1 00:04:54.934 EAL: Detected lcore 20 as core 2 on socket 1 00:04:54.934 EAL: Detected lcore 21 as core 3 on socket 1 00:04:54.934 EAL: Detected lcore 22 as core 4 on socket 1 00:04:54.934 EAL: Detected lcore 23 as core 8 on socket 1 00:04:54.934 EAL: Detected lcore 24 as core 9 on socket 1 00:04:54.934 EAL: Detected lcore 25 as core 10 on socket 1 00:04:54.935 EAL: Detected lcore 26 as core 11 on socket 1 00:04:54.935 EAL: Detected lcore 27 as core 16 on socket 1 00:04:54.935 EAL: Detected lcore 28 as core 17 on socket 1 00:04:54.935 EAL: Detected lcore 29 as core 18 on socket 1 00:04:54.935 EAL: Detected lcore 30 as core 19 on socket 1 00:04:54.935 EAL: Detected lcore 31 as core 20 on socket 1 00:04:54.935 EAL: Detected lcore 32 as core 24 on socket 1 00:04:54.935 EAL: Detected lcore 33 as core 25 on socket 1 00:04:54.935 EAL: Detected lcore 34 as core 26 on socket 1 00:04:54.935 EAL: Detected lcore 35 as core 27 on socket 1 00:04:54.935 EAL: Detected lcore 36 as core 0 on socket 0 00:04:54.935 EAL: Detected lcore 37 as core 1 on socket 0 00:04:54.935 EAL: Detected lcore 38 as core 2 on socket 0 00:04:54.935 EAL: Detected lcore 39 as core 3 on socket 0 00:04:54.935 EAL: Detected lcore 40 as core 4 on socket 0 00:04:54.935 EAL: Detected lcore 41 as core 8 on socket 0 00:04:54.935 EAL: Detected lcore 42 as core 9 on socket 0 00:04:54.935 EAL: Detected lcore 43 as core 10 on 
socket 0 00:04:54.935 EAL: Detected lcore 44 as core 11 on socket 0 00:04:54.935 EAL: Detected lcore 45 as core 16 on socket 0 00:04:54.935 EAL: Detected lcore 46 as core 17 on socket 0 00:04:54.935 EAL: Detected lcore 47 as core 18 on socket 0 00:04:54.935 EAL: Detected lcore 48 as core 19 on socket 0 00:04:54.935 EAL: Detected lcore 49 as core 20 on socket 0 00:04:54.935 EAL: Detected lcore 50 as core 24 on socket 0 00:04:54.935 EAL: Detected lcore 51 as core 25 on socket 0 00:04:54.935 EAL: Detected lcore 52 as core 26 on socket 0 00:04:54.935 EAL: Detected lcore 53 as core 27 on socket 0 00:04:54.935 EAL: Detected lcore 54 as core 0 on socket 1 00:04:54.935 EAL: Detected lcore 55 as core 1 on socket 1 00:04:54.935 EAL: Detected lcore 56 as core 2 on socket 1 00:04:54.935 EAL: Detected lcore 57 as core 3 on socket 1 00:04:54.935 EAL: Detected lcore 58 as core 4 on socket 1 00:04:54.935 EAL: Detected lcore 59 as core 8 on socket 1 00:04:54.935 EAL: Detected lcore 60 as core 9 on socket 1 00:04:54.935 EAL: Detected lcore 61 as core 10 on socket 1 00:04:54.935 EAL: Detected lcore 62 as core 11 on socket 1 00:04:54.935 EAL: Detected lcore 63 as core 16 on socket 1 00:04:54.935 EAL: Detected lcore 64 as core 17 on socket 1 00:04:54.935 EAL: Detected lcore 65 as core 18 on socket 1 00:04:54.935 EAL: Detected lcore 66 as core 19 on socket 1 00:04:54.935 EAL: Detected lcore 67 as core 20 on socket 1 00:04:54.935 EAL: Detected lcore 68 as core 24 on socket 1 00:04:54.935 EAL: Detected lcore 69 as core 25 on socket 1 00:04:54.935 EAL: Detected lcore 70 as core 26 on socket 1 00:04:54.935 EAL: Detected lcore 71 as core 27 on socket 1 00:04:54.935 EAL: Maximum logical cores by configuration: 128 00:04:54.935 EAL: Detected CPU lcores: 72 00:04:54.935 EAL: Detected NUMA nodes: 2 00:04:54.935 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:54.935 EAL: Detected shared linkage of DPDK 00:04:54.935 EAL: No shared files mode enabled, IPC will be disabled 00:04:54.935 EAL: Bus pci wants IOVA as 'DC' 00:04:54.935 EAL: Buses did not request a specific IOVA mode. 00:04:54.935 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:54.935 EAL: Selected IOVA mode 'VA' 00:04:54.935 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.935 EAL: Probing VFIO support... 00:04:54.935 EAL: IOMMU type 1 (Type 1) is supported 00:04:54.935 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:54.935 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:54.935 EAL: VFIO support initialized 00:04:54.935 EAL: Ask a virtual area of 0x2e000 bytes 00:04:54.935 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:54.935 EAL: Setting up physically contiguous memory... 
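[Editor note] The EAL startup banner above (72 lcores across 2 NUMA sockets, IOMMU type 1, VFIO initialized, 2 MB hugepages) can be sanity-checked from the shell. A quick, generic sketch, nothing SPDK-specific assumed:

  # Cross-check what EAL just reported about this host (illustrative commands).
  lscpu | grep -E 'Socket|NUMA node|^CPU\(s\)'     # sockets, NUMA nodes, logical core count
  ls /sys/kernel/iommu_groups | wc -l              # non-zero means the IOMMU is active
  lsmod | grep -E '^vfio(_pci)?'                   # VFIO modules backing "IOMMU type 1"
  grep -i hugepages /proc/meminfo                  # the 2 MB hugepage pool EAL maps from
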
00:04:54.935 EAL: Setting maximum number of open files to 524288 00:04:54.935 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:54.935 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:54.935 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:54.935 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.935 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:54.935 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:54.935 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.935 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:54.935 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:54.935 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.935 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:54.935 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:54.935 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.935 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:54.935 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:54.935 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.935 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:54.935 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:54.935 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.935 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:54.935 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:54.935 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.935 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:54.935 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:54.935 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.935 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:54.935 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:54.935 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:54.935 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.935 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:54.935 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:54.935 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.935 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:54.935 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:54.935 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.935 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:54.935 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:54.935 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.935 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:54.935 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:54.935 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.935 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:54.935 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:54.935 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.935 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:54.935 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:54.935 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.935 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:54.935 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:54.935 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.935 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:54.935 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:54.935 EAL: Hugepages will be freed exactly as allocated. 00:04:54.935 EAL: No shared files mode enabled, IPC is disabled 00:04:54.935 EAL: No shared files mode enabled, IPC is disabled 00:04:54.935 EAL: TSC frequency is ~2300000 KHz 00:04:54.935 EAL: Main lcore 0 is ready (tid=7f549a601a00;cpuset=[0]) 00:04:54.935 EAL: Trying to obtain current memory policy. 00:04:54.935 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.935 EAL: Restoring previous memory policy: 0 00:04:54.935 EAL: request: mp_malloc_sync 00:04:54.935 EAL: No shared files mode enabled, IPC is disabled 00:04:54.935 EAL: Heap on socket 0 was expanded by 2MB 00:04:54.935 EAL: No shared files mode enabled, IPC is disabled 00:04:54.935 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:54.935 EAL: Mem event callback 'spdk:(nil)' registered 00:04:54.935 00:04:54.935 00:04:54.935 CUnit - A unit testing framework for C - Version 2.1-3 00:04:54.935 http://cunit.sourceforge.net/ 00:04:54.935 00:04:54.935 00:04:54.935 Suite: components_suite 00:04:54.935 Test: vtophys_malloc_test ...passed 00:04:54.935 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:54.935 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.935 EAL: Restoring previous memory policy: 4 00:04:54.935 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.935 EAL: request: mp_malloc_sync 00:04:54.935 EAL: No shared files mode enabled, IPC is disabled 00:04:54.935 EAL: Heap on socket 0 was expanded by 4MB 00:04:54.935 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.935 EAL: request: mp_malloc_sync 00:04:54.935 EAL: No shared files mode enabled, IPC is disabled 00:04:54.935 EAL: Heap on socket 0 was shrunk by 4MB 00:04:54.935 EAL: Trying to obtain current memory policy. 00:04:54.935 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.935 EAL: Restoring previous memory policy: 4 00:04:54.935 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.935 EAL: request: mp_malloc_sync 00:04:54.935 EAL: No shared files mode enabled, IPC is disabled 00:04:54.935 EAL: Heap on socket 0 was expanded by 6MB 00:04:54.935 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.935 EAL: request: mp_malloc_sync 00:04:54.935 EAL: No shared files mode enabled, IPC is disabled 00:04:54.935 EAL: Heap on socket 0 was shrunk by 6MB 00:04:54.935 EAL: Trying to obtain current memory policy. 00:04:54.935 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.935 EAL: Restoring previous memory policy: 4 00:04:54.935 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.935 EAL: request: mp_malloc_sync 00:04:54.935 EAL: No shared files mode enabled, IPC is disabled 00:04:54.935 EAL: Heap on socket 0 was expanded by 10MB 00:04:54.935 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.935 EAL: request: mp_malloc_sync 00:04:54.935 EAL: No shared files mode enabled, IPC is disabled 00:04:54.935 EAL: Heap on socket 0 was shrunk by 10MB 00:04:54.935 EAL: Trying to obtain current memory policy. 
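[Editor note] Each "Heap on socket 0 was expanded by N MB" / "shrunk by N MB" pair in the loop above is EAL mapping, then releasing, hugepage-backed memory for one allocation round of the vtophys test; with the 2 MB pages configured on this host that is roughly N/2 pages moving in and out of the pool. One way to watch that from outside the running test (an illustrative host command, not part of the test itself):

  # Watch per-node 2 MB hugepage consumption while the vtophys test allocates and frees.
  watch -n1 'grep -H "" /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages'
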
00:04:54.935 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.935 EAL: Restoring previous memory policy: 4 00:04:54.935 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.935 EAL: request: mp_malloc_sync 00:04:54.935 EAL: No shared files mode enabled, IPC is disabled 00:04:54.935 EAL: Heap on socket 0 was expanded by 18MB 00:04:54.935 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.935 EAL: request: mp_malloc_sync 00:04:54.935 EAL: No shared files mode enabled, IPC is disabled 00:04:54.935 EAL: Heap on socket 0 was shrunk by 18MB 00:04:54.935 EAL: Trying to obtain current memory policy. 00:04:54.935 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.935 EAL: Restoring previous memory policy: 4 00:04:54.935 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.935 EAL: request: mp_malloc_sync 00:04:54.935 EAL: No shared files mode enabled, IPC is disabled 00:04:54.935 EAL: Heap on socket 0 was expanded by 34MB 00:04:54.935 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.935 EAL: request: mp_malloc_sync 00:04:54.935 EAL: No shared files mode enabled, IPC is disabled 00:04:54.935 EAL: Heap on socket 0 was shrunk by 34MB 00:04:54.935 EAL: Trying to obtain current memory policy. 00:04:54.935 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.935 EAL: Restoring previous memory policy: 4 00:04:54.935 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.935 EAL: request: mp_malloc_sync 00:04:54.935 EAL: No shared files mode enabled, IPC is disabled 00:04:54.935 EAL: Heap on socket 0 was expanded by 66MB 00:04:55.194 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.195 EAL: request: mp_malloc_sync 00:04:55.195 EAL: No shared files mode enabled, IPC is disabled 00:04:55.195 EAL: Heap on socket 0 was shrunk by 66MB 00:04:55.195 EAL: Trying to obtain current memory policy. 00:04:55.195 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:55.195 EAL: Restoring previous memory policy: 4 00:04:55.195 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.195 EAL: request: mp_malloc_sync 00:04:55.195 EAL: No shared files mode enabled, IPC is disabled 00:04:55.195 EAL: Heap on socket 0 was expanded by 130MB 00:04:55.195 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.195 EAL: request: mp_malloc_sync 00:04:55.195 EAL: No shared files mode enabled, IPC is disabled 00:04:55.195 EAL: Heap on socket 0 was shrunk by 130MB 00:04:55.195 EAL: Trying to obtain current memory policy. 00:04:55.195 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:55.195 EAL: Restoring previous memory policy: 4 00:04:55.195 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.195 EAL: request: mp_malloc_sync 00:04:55.195 EAL: No shared files mode enabled, IPC is disabled 00:04:55.195 EAL: Heap on socket 0 was expanded by 258MB 00:04:55.195 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.195 EAL: request: mp_malloc_sync 00:04:55.195 EAL: No shared files mode enabled, IPC is disabled 00:04:55.195 EAL: Heap on socket 0 was shrunk by 258MB 00:04:55.195 EAL: Trying to obtain current memory policy. 
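[Editor note] The paired "Setting policy MPOL_PREFERRED for socket 0" / "Restoring previous memory policy" lines show EAL temporarily preferring socket 0 while it maps new hugepages for each allocation, then putting the old policy back. The same preference can be imposed on the whole test process from outside, assuming numactl is installed on the host; the binary path is the one invoked in the log:

  # Run the vtophys test with memory and CPUs pinned to NUMA node 0 (illustrative).
  numactl --preferred=0 --cpunodebind=0 \
      /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys
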
00:04:55.195 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:55.454 EAL: Restoring previous memory policy: 4 00:04:55.454 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.454 EAL: request: mp_malloc_sync 00:04:55.454 EAL: No shared files mode enabled, IPC is disabled 00:04:55.454 EAL: Heap on socket 0 was expanded by 514MB 00:04:55.454 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.713 EAL: request: mp_malloc_sync 00:04:55.713 EAL: No shared files mode enabled, IPC is disabled 00:04:55.713 EAL: Heap on socket 0 was shrunk by 514MB 00:04:55.713 EAL: Trying to obtain current memory policy. 00:04:55.713 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:55.713 EAL: Restoring previous memory policy: 4 00:04:55.713 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.713 EAL: request: mp_malloc_sync 00:04:55.713 EAL: No shared files mode enabled, IPC is disabled 00:04:55.713 EAL: Heap on socket 0 was expanded by 1026MB 00:04:55.972 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.231 EAL: request: mp_malloc_sync 00:04:56.231 EAL: No shared files mode enabled, IPC is disabled 00:04:56.231 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:56.231 passed 00:04:56.231 00:04:56.231 Run Summary: Type Total Ran Passed Failed Inactive 00:04:56.231 suites 1 1 n/a 0 0 00:04:56.231 tests 2 2 2 0 0 00:04:56.231 asserts 497 497 497 0 n/a 00:04:56.231 00:04:56.231 Elapsed time = 1.130 seconds 00:04:56.231 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.231 EAL: request: mp_malloc_sync 00:04:56.231 EAL: No shared files mode enabled, IPC is disabled 00:04:56.231 EAL: Heap on socket 0 was shrunk by 2MB 00:04:56.231 EAL: No shared files mode enabled, IPC is disabled 00:04:56.231 EAL: No shared files mode enabled, IPC is disabled 00:04:56.231 EAL: No shared files mode enabled, IPC is disabled 00:04:56.231 00:04:56.231 real 0m1.259s 00:04:56.231 user 0m0.734s 00:04:56.231 sys 0m0.496s 00:04:56.231 16:16:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:56.231 16:16:05 -- common/autotest_common.sh@10 -- # set +x 00:04:56.231 ************************************ 00:04:56.231 END TEST env_vtophys 00:04:56.231 ************************************ 00:04:56.231 16:16:05 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:04:56.231 16:16:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:56.231 16:16:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:56.231 16:16:05 -- common/autotest_common.sh@10 -- # set +x 00:04:56.231 ************************************ 00:04:56.231 START TEST env_pci 00:04:56.231 ************************************ 00:04:56.231 16:16:05 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:04:56.490 00:04:56.490 00:04:56.490 CUnit - A unit testing framework for C - Version 2.1-3 00:04:56.490 http://cunit.sourceforge.net/ 00:04:56.490 00:04:56.490 00:04:56.490 Suite: pci 00:04:56.490 Test: pci_hook ...[2024-04-26 16:16:05.275314] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 325921 has claimed it 00:04:56.490 EAL: Cannot find device (10000:00:01.0) 00:04:56.490 EAL: Failed to attach device on primary process 00:04:56.490 passed 00:04:56.490 00:04:56.490 Run Summary: Type Total Ran Passed Failed Inactive 00:04:56.490 suites 1 1 n/a 0 0 00:04:56.491 tests 1 1 1 0 0 00:04:56.491 asserts 
25 25 25 0 n/a 00:04:56.491 00:04:56.491 Elapsed time = 0.032 seconds 00:04:56.491 00:04:56.491 real 0m0.055s 00:04:56.491 user 0m0.015s 00:04:56.491 sys 0m0.040s 00:04:56.491 16:16:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:56.491 16:16:05 -- common/autotest_common.sh@10 -- # set +x 00:04:56.491 ************************************ 00:04:56.491 END TEST env_pci 00:04:56.491 ************************************ 00:04:56.491 16:16:05 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:56.491 16:16:05 -- env/env.sh@15 -- # uname 00:04:56.491 16:16:05 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:56.491 16:16:05 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:56.491 16:16:05 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:56.491 16:16:05 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:04:56.491 16:16:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:56.491 16:16:05 -- common/autotest_common.sh@10 -- # set +x 00:04:56.749 ************************************ 00:04:56.749 START TEST env_dpdk_post_init 00:04:56.749 ************************************ 00:04:56.749 16:16:05 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:56.749 EAL: Detected CPU lcores: 72 00:04:56.749 EAL: Detected NUMA nodes: 2 00:04:56.749 EAL: Detected shared linkage of DPDK 00:04:56.749 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:56.749 EAL: Selected IOVA mode 'VA' 00:04:56.749 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.749 EAL: VFIO support initialized 00:04:56.749 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:56.749 EAL: Using IOMMU type 1 (Type 1) 00:04:56.749 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:56.749 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:56.749 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:56.749 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:56.749 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:56.749 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:56.749 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:56.749 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:57.008 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:5e:00.0 (socket 0) 00:04:57.009 EAL: Ignore mapping IO port bar(1) 00:04:57.009 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:57.009 EAL: Ignore mapping IO port bar(1) 00:04:57.009 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:57.009 EAL: Ignore mapping IO port bar(1) 00:04:57.009 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:57.009 EAL: Ignore mapping IO port bar(1) 00:04:57.009 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:57.267 EAL: Ignore mapping IO port bar(1) 00:04:57.267 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:57.267 EAL: Ignore mapping IO port bar(1) 00:04:57.267 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 
00:04:57.267 EAL: Ignore mapping IO port bar(1) 00:04:57.267 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:57.267 EAL: Ignore mapping IO port bar(1) 00:04:57.267 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:57.527 EAL: Probe PCI driver: spdk_nvme (8086:2701) device: 0000:af:00.0 (socket 1) 00:04:57.527 EAL: Probe PCI driver: spdk_nvme (8086:2701) device: 0000:b0:00.0 (socket 1) 00:04:57.527 EAL: Releasing PCI mapped resource for 0000:b0:00.0 00:04:57.527 EAL: Calling pci_unmap_resource for 0000:b0:00.0 at 0x202001048000 00:04:57.786 EAL: Releasing PCI mapped resource for 0000:af:00.0 00:04:57.786 EAL: Calling pci_unmap_resource for 0000:af:00.0 at 0x202001044000 00:04:57.786 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:57.786 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:58.046 Starting DPDK initialization... 00:04:58.046 Starting SPDK post initialization... 00:04:58.046 SPDK NVMe probe 00:04:58.046 Attaching to 0000:5e:00.0 00:04:58.046 Attaching to 0000:af:00.0 00:04:58.046 Attaching to 0000:b0:00.0 00:04:58.046 Attached to 0000:af:00.0 00:04:58.046 Attached to 0000:b0:00.0 00:04:58.046 Attached to 0000:5e:00.0 00:04:58.046 Cleaning up... 00:04:58.046 00:04:58.046 real 0m1.345s 00:04:58.046 user 0m0.377s 00:04:58.046 sys 0m0.099s 00:04:58.046 16:16:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:58.046 16:16:06 -- common/autotest_common.sh@10 -- # set +x 00:04:58.046 ************************************ 00:04:58.046 END TEST env_dpdk_post_init 00:04:58.046 ************************************ 00:04:58.046 16:16:06 -- env/env.sh@26 -- # uname 00:04:58.046 16:16:06 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:58.046 16:16:06 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:58.046 16:16:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:58.046 16:16:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:58.046 16:16:06 -- common/autotest_common.sh@10 -- # set +x 00:04:58.306 ************************************ 00:04:58.306 START TEST env_mem_callbacks 00:04:58.306 ************************************ 00:04:58.306 16:16:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:58.306 EAL: Detected CPU lcores: 72 00:04:58.306 EAL: Detected NUMA nodes: 2 00:04:58.306 EAL: Detected shared linkage of DPDK 00:04:58.306 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:58.306 EAL: Selected IOVA mode 'VA' 00:04:58.306 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.306 EAL: VFIO support initialized 00:04:58.306 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:58.306 00:04:58.306 00:04:58.306 CUnit - A unit testing framework for C - Version 2.1-3 00:04:58.306 http://cunit.sourceforge.net/ 00:04:58.306 00:04:58.306 00:04:58.306 Suite: memory 00:04:58.306 Test: test ... 
00:04:58.306 register 0x200000200000 2097152 00:04:58.306 malloc 3145728 00:04:58.306 register 0x200000400000 4194304 00:04:58.306 buf 0x200000500000 len 3145728 PASSED 00:04:58.306 malloc 64 00:04:58.306 buf 0x2000004fff40 len 64 PASSED 00:04:58.306 malloc 4194304 00:04:58.306 register 0x200000800000 6291456 00:04:58.306 buf 0x200000a00000 len 4194304 PASSED 00:04:58.306 free 0x200000500000 3145728 00:04:58.306 free 0x2000004fff40 64 00:04:58.306 unregister 0x200000400000 4194304 PASSED 00:04:58.306 free 0x200000a00000 4194304 00:04:58.306 unregister 0x200000800000 6291456 PASSED 00:04:58.306 malloc 8388608 00:04:58.306 register 0x200000400000 10485760 00:04:58.306 buf 0x200000600000 len 8388608 PASSED 00:04:58.306 free 0x200000600000 8388608 00:04:58.306 unregister 0x200000400000 10485760 PASSED 00:04:58.306 passed 00:04:58.306 00:04:58.306 Run Summary: Type Total Ran Passed Failed Inactive 00:04:58.306 suites 1 1 n/a 0 0 00:04:58.306 tests 1 1 1 0 0 00:04:58.306 asserts 15 15 15 0 n/a 00:04:58.306 00:04:58.306 Elapsed time = 0.006 seconds 00:04:58.306 00:04:58.306 real 0m0.066s 00:04:58.306 user 0m0.020s 00:04:58.306 sys 0m0.046s 00:04:58.306 16:16:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:58.306 16:16:07 -- common/autotest_common.sh@10 -- # set +x 00:04:58.306 ************************************ 00:04:58.306 END TEST env_mem_callbacks 00:04:58.306 ************************************ 00:04:58.306 00:04:58.306 real 0m4.021s 00:04:58.306 user 0m1.682s 00:04:58.306 sys 0m1.358s 00:04:58.306 16:16:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:58.306 16:16:07 -- common/autotest_common.sh@10 -- # set +x 00:04:58.306 ************************************ 00:04:58.306 END TEST env 00:04:58.306 ************************************ 00:04:58.306 16:16:07 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:04:58.306 16:16:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:58.306 16:16:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:58.306 16:16:07 -- common/autotest_common.sh@10 -- # set +x 00:04:58.566 ************************************ 00:04:58.566 START TEST rpc 00:04:58.566 ************************************ 00:04:58.566 16:16:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:04:58.566 * Looking for test storage... 00:04:58.566 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:58.566 16:16:07 -- rpc/rpc.sh@65 -- # spdk_pid=326418 00:04:58.566 16:16:07 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:58.566 16:16:07 -- rpc/rpc.sh@67 -- # waitforlisten 326418 00:04:58.566 16:16:07 -- common/autotest_common.sh@817 -- # '[' -z 326418 ']' 00:04:58.566 16:16:07 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:58.566 16:16:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.566 16:16:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:58.566 16:16:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
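
The register/unregister lines above are the mem_callbacks suite registering plain malloc()ed buffers with the env memory map; the callback it installs prints each registration of the enclosing 2 MiB-aligned span, which is why a 3 MiB malloc shows up as a 4 MiB registration. With the env suite finished at this point, the same three binaries can be run outside the autotest harness using the paths and flags visible in the log; a minimal sketch (SPDK_DIR is a placeholder for the checkout, and running them standalone still assumes hugepages have been set up):

    SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk   # substitute your checkout

    sudo $SPDK_DIR/test/env/mem_callbacks/mem_callbacks
    sudo $SPDK_DIR/test/env/pci/pci_ut
    sudo $SPDK_DIR/test/env/env_dpdk_post_init/env_dpdk_post_init \
        -c 0x1 --base-virtaddr=0x200000000000
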
00:04:58.566 16:16:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:58.566 16:16:07 -- common/autotest_common.sh@10 -- # set +x 00:04:58.566 [2024-04-26 16:16:07.546279] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:04:58.566 [2024-04-26 16:16:07.546339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid326418 ] 00:04:58.566 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.826 [2024-04-26 16:16:07.620577] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.826 [2024-04-26 16:16:07.704748] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:58.826 [2024-04-26 16:16:07.704789] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 326418' to capture a snapshot of events at runtime. 00:04:58.826 [2024-04-26 16:16:07.704798] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:58.826 [2024-04-26 16:16:07.704821] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:58.826 [2024-04-26 16:16:07.704829] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid326418 for offline analysis/debug. 00:04:58.826 [2024-04-26 16:16:07.704859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.395 16:16:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:59.395 16:16:08 -- common/autotest_common.sh@850 -- # return 0 00:04:59.395 16:16:08 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:59.395 16:16:08 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:59.395 16:16:08 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:59.396 16:16:08 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:59.396 16:16:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:59.396 16:16:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:59.396 16:16:08 -- common/autotest_common.sh@10 -- # set +x 00:04:59.655 ************************************ 00:04:59.655 START TEST rpc_integrity 00:04:59.655 ************************************ 00:04:59.655 16:16:08 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:04:59.655 16:16:08 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:59.655 16:16:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:59.655 16:16:08 -- common/autotest_common.sh@10 -- # set +x 00:04:59.655 16:16:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:59.655 16:16:08 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:59.655 16:16:08 -- rpc/rpc.sh@13 -- # jq length 00:04:59.655 16:16:08 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:59.655 16:16:08 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:59.655 16:16:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:59.655 16:16:08 -- 
common/autotest_common.sh@10 -- # set +x 00:04:59.655 16:16:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:59.655 16:16:08 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:59.655 16:16:08 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:59.655 16:16:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:59.655 16:16:08 -- common/autotest_common.sh@10 -- # set +x 00:04:59.655 16:16:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:59.655 16:16:08 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:59.655 { 00:04:59.655 "name": "Malloc0", 00:04:59.655 "aliases": [ 00:04:59.655 "91637747-a796-4579-8ba1-2c49c042f4bc" 00:04:59.655 ], 00:04:59.655 "product_name": "Malloc disk", 00:04:59.655 "block_size": 512, 00:04:59.655 "num_blocks": 16384, 00:04:59.655 "uuid": "91637747-a796-4579-8ba1-2c49c042f4bc", 00:04:59.655 "assigned_rate_limits": { 00:04:59.655 "rw_ios_per_sec": 0, 00:04:59.655 "rw_mbytes_per_sec": 0, 00:04:59.655 "r_mbytes_per_sec": 0, 00:04:59.655 "w_mbytes_per_sec": 0 00:04:59.655 }, 00:04:59.655 "claimed": false, 00:04:59.655 "zoned": false, 00:04:59.655 "supported_io_types": { 00:04:59.655 "read": true, 00:04:59.655 "write": true, 00:04:59.655 "unmap": true, 00:04:59.655 "write_zeroes": true, 00:04:59.655 "flush": true, 00:04:59.655 "reset": true, 00:04:59.655 "compare": false, 00:04:59.655 "compare_and_write": false, 00:04:59.655 "abort": true, 00:04:59.655 "nvme_admin": false, 00:04:59.655 "nvme_io": false 00:04:59.655 }, 00:04:59.655 "memory_domains": [ 00:04:59.655 { 00:04:59.655 "dma_device_id": "system", 00:04:59.655 "dma_device_type": 1 00:04:59.655 }, 00:04:59.655 { 00:04:59.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.655 "dma_device_type": 2 00:04:59.655 } 00:04:59.655 ], 00:04:59.655 "driver_specific": {} 00:04:59.655 } 00:04:59.655 ]' 00:04:59.655 16:16:08 -- rpc/rpc.sh@17 -- # jq length 00:04:59.655 16:16:08 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:59.655 16:16:08 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:59.655 16:16:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:59.655 16:16:08 -- common/autotest_common.sh@10 -- # set +x 00:04:59.655 [2024-04-26 16:16:08.646003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:59.655 [2024-04-26 16:16:08.646034] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:59.655 [2024-04-26 16:16:08.646049] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b74390 00:04:59.655 [2024-04-26 16:16:08.646057] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:59.655 [2024-04-26 16:16:08.647275] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:59.655 [2024-04-26 16:16:08.647297] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:59.655 Passthru0 00:04:59.655 16:16:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:59.655 16:16:08 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:59.655 16:16:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:59.655 16:16:08 -- common/autotest_common.sh@10 -- # set +x 00:04:59.915 16:16:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:59.915 16:16:08 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:59.915 { 00:04:59.915 "name": "Malloc0", 00:04:59.915 "aliases": [ 00:04:59.915 "91637747-a796-4579-8ba1-2c49c042f4bc" 00:04:59.915 ], 00:04:59.915 "product_name": "Malloc disk", 00:04:59.915 "block_size": 512, 00:04:59.915 
"num_blocks": 16384, 00:04:59.915 "uuid": "91637747-a796-4579-8ba1-2c49c042f4bc", 00:04:59.915 "assigned_rate_limits": { 00:04:59.915 "rw_ios_per_sec": 0, 00:04:59.915 "rw_mbytes_per_sec": 0, 00:04:59.915 "r_mbytes_per_sec": 0, 00:04:59.915 "w_mbytes_per_sec": 0 00:04:59.915 }, 00:04:59.915 "claimed": true, 00:04:59.915 "claim_type": "exclusive_write", 00:04:59.915 "zoned": false, 00:04:59.915 "supported_io_types": { 00:04:59.915 "read": true, 00:04:59.915 "write": true, 00:04:59.915 "unmap": true, 00:04:59.915 "write_zeroes": true, 00:04:59.915 "flush": true, 00:04:59.915 "reset": true, 00:04:59.915 "compare": false, 00:04:59.915 "compare_and_write": false, 00:04:59.915 "abort": true, 00:04:59.915 "nvme_admin": false, 00:04:59.915 "nvme_io": false 00:04:59.915 }, 00:04:59.915 "memory_domains": [ 00:04:59.915 { 00:04:59.915 "dma_device_id": "system", 00:04:59.915 "dma_device_type": 1 00:04:59.915 }, 00:04:59.915 { 00:04:59.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.915 "dma_device_type": 2 00:04:59.915 } 00:04:59.915 ], 00:04:59.915 "driver_specific": {} 00:04:59.915 }, 00:04:59.915 { 00:04:59.915 "name": "Passthru0", 00:04:59.915 "aliases": [ 00:04:59.915 "1b5098a9-d77b-56fc-99cf-2f0c72bdf280" 00:04:59.915 ], 00:04:59.915 "product_name": "passthru", 00:04:59.915 "block_size": 512, 00:04:59.915 "num_blocks": 16384, 00:04:59.915 "uuid": "1b5098a9-d77b-56fc-99cf-2f0c72bdf280", 00:04:59.915 "assigned_rate_limits": { 00:04:59.915 "rw_ios_per_sec": 0, 00:04:59.915 "rw_mbytes_per_sec": 0, 00:04:59.915 "r_mbytes_per_sec": 0, 00:04:59.915 "w_mbytes_per_sec": 0 00:04:59.915 }, 00:04:59.915 "claimed": false, 00:04:59.915 "zoned": false, 00:04:59.915 "supported_io_types": { 00:04:59.915 "read": true, 00:04:59.915 "write": true, 00:04:59.915 "unmap": true, 00:04:59.915 "write_zeroes": true, 00:04:59.915 "flush": true, 00:04:59.915 "reset": true, 00:04:59.915 "compare": false, 00:04:59.915 "compare_and_write": false, 00:04:59.915 "abort": true, 00:04:59.915 "nvme_admin": false, 00:04:59.915 "nvme_io": false 00:04:59.915 }, 00:04:59.915 "memory_domains": [ 00:04:59.915 { 00:04:59.915 "dma_device_id": "system", 00:04:59.915 "dma_device_type": 1 00:04:59.915 }, 00:04:59.915 { 00:04:59.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.915 "dma_device_type": 2 00:04:59.915 } 00:04:59.915 ], 00:04:59.915 "driver_specific": { 00:04:59.915 "passthru": { 00:04:59.915 "name": "Passthru0", 00:04:59.915 "base_bdev_name": "Malloc0" 00:04:59.915 } 00:04:59.915 } 00:04:59.915 } 00:04:59.915 ]' 00:04:59.915 16:16:08 -- rpc/rpc.sh@21 -- # jq length 00:04:59.915 16:16:08 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:59.915 16:16:08 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:59.915 16:16:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:59.915 16:16:08 -- common/autotest_common.sh@10 -- # set +x 00:04:59.915 16:16:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:59.915 16:16:08 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:59.915 16:16:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:59.915 16:16:08 -- common/autotest_common.sh@10 -- # set +x 00:04:59.915 16:16:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:59.915 16:16:08 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:59.915 16:16:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:59.915 16:16:08 -- common/autotest_common.sh@10 -- # set +x 00:04:59.915 16:16:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:59.915 16:16:08 -- rpc/rpc.sh@25 -- 
# bdevs='[]' 00:04:59.915 16:16:08 -- rpc/rpc.sh@26 -- # jq length 00:04:59.915 16:16:08 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:59.915 00:04:59.915 real 0m0.288s 00:04:59.915 user 0m0.166s 00:04:59.915 sys 0m0.058s 00:04:59.915 16:16:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:59.915 16:16:08 -- common/autotest_common.sh@10 -- # set +x 00:04:59.915 ************************************ 00:04:59.915 END TEST rpc_integrity 00:04:59.915 ************************************ 00:04:59.915 16:16:08 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:59.915 16:16:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:59.915 16:16:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:59.915 16:16:08 -- common/autotest_common.sh@10 -- # set +x 00:05:00.175 ************************************ 00:05:00.175 START TEST rpc_plugins 00:05:00.175 ************************************ 00:05:00.175 16:16:09 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:05:00.175 16:16:09 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:00.175 16:16:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:00.175 16:16:09 -- common/autotest_common.sh@10 -- # set +x 00:05:00.175 16:16:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:00.175 16:16:09 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:00.175 16:16:09 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:00.175 16:16:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:00.175 16:16:09 -- common/autotest_common.sh@10 -- # set +x 00:05:00.175 16:16:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:00.175 16:16:09 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:00.175 { 00:05:00.175 "name": "Malloc1", 00:05:00.175 "aliases": [ 00:05:00.175 "5aba8ca0-87ba-4737-9fd8-9fab2978103c" 00:05:00.175 ], 00:05:00.175 "product_name": "Malloc disk", 00:05:00.175 "block_size": 4096, 00:05:00.175 "num_blocks": 256, 00:05:00.175 "uuid": "5aba8ca0-87ba-4737-9fd8-9fab2978103c", 00:05:00.175 "assigned_rate_limits": { 00:05:00.175 "rw_ios_per_sec": 0, 00:05:00.175 "rw_mbytes_per_sec": 0, 00:05:00.175 "r_mbytes_per_sec": 0, 00:05:00.175 "w_mbytes_per_sec": 0 00:05:00.175 }, 00:05:00.175 "claimed": false, 00:05:00.175 "zoned": false, 00:05:00.175 "supported_io_types": { 00:05:00.175 "read": true, 00:05:00.175 "write": true, 00:05:00.175 "unmap": true, 00:05:00.175 "write_zeroes": true, 00:05:00.175 "flush": true, 00:05:00.175 "reset": true, 00:05:00.175 "compare": false, 00:05:00.175 "compare_and_write": false, 00:05:00.175 "abort": true, 00:05:00.175 "nvme_admin": false, 00:05:00.175 "nvme_io": false 00:05:00.175 }, 00:05:00.175 "memory_domains": [ 00:05:00.175 { 00:05:00.175 "dma_device_id": "system", 00:05:00.175 "dma_device_type": 1 00:05:00.175 }, 00:05:00.175 { 00:05:00.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:00.175 "dma_device_type": 2 00:05:00.175 } 00:05:00.175 ], 00:05:00.175 "driver_specific": {} 00:05:00.175 } 00:05:00.175 ]' 00:05:00.175 16:16:09 -- rpc/rpc.sh@32 -- # jq length 00:05:00.175 16:16:09 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:00.175 16:16:09 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:00.175 16:16:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:00.175 16:16:09 -- common/autotest_common.sh@10 -- # set +x 00:05:00.175 16:16:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:00.175 16:16:09 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:00.175 16:16:09 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:05:00.175 16:16:09 -- common/autotest_common.sh@10 -- # set +x 00:05:00.175 16:16:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:00.175 16:16:09 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:00.175 16:16:09 -- rpc/rpc.sh@36 -- # jq length 00:05:00.175 16:16:09 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:00.175 00:05:00.175 real 0m0.145s 00:05:00.175 user 0m0.086s 00:05:00.175 sys 0m0.028s 00:05:00.175 16:16:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:00.175 16:16:09 -- common/autotest_common.sh@10 -- # set +x 00:05:00.175 ************************************ 00:05:00.175 END TEST rpc_plugins 00:05:00.175 ************************************ 00:05:00.175 16:16:09 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:00.175 16:16:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:00.175 16:16:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:00.175 16:16:09 -- common/autotest_common.sh@10 -- # set +x 00:05:00.434 ************************************ 00:05:00.434 START TEST rpc_trace_cmd_test 00:05:00.434 ************************************ 00:05:00.434 16:16:09 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:05:00.434 16:16:09 -- rpc/rpc.sh@40 -- # local info 00:05:00.434 16:16:09 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:00.434 16:16:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:00.434 16:16:09 -- common/autotest_common.sh@10 -- # set +x 00:05:00.434 16:16:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:00.434 16:16:09 -- rpc/rpc.sh@42 -- # info='{ 00:05:00.434 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid326418", 00:05:00.434 "tpoint_group_mask": "0x8", 00:05:00.434 "iscsi_conn": { 00:05:00.434 "mask": "0x2", 00:05:00.434 "tpoint_mask": "0x0" 00:05:00.434 }, 00:05:00.434 "scsi": { 00:05:00.434 "mask": "0x4", 00:05:00.434 "tpoint_mask": "0x0" 00:05:00.434 }, 00:05:00.434 "bdev": { 00:05:00.434 "mask": "0x8", 00:05:00.434 "tpoint_mask": "0xffffffffffffffff" 00:05:00.434 }, 00:05:00.434 "nvmf_rdma": { 00:05:00.434 "mask": "0x10", 00:05:00.434 "tpoint_mask": "0x0" 00:05:00.434 }, 00:05:00.434 "nvmf_tcp": { 00:05:00.434 "mask": "0x20", 00:05:00.434 "tpoint_mask": "0x0" 00:05:00.434 }, 00:05:00.434 "ftl": { 00:05:00.434 "mask": "0x40", 00:05:00.434 "tpoint_mask": "0x0" 00:05:00.434 }, 00:05:00.434 "blobfs": { 00:05:00.434 "mask": "0x80", 00:05:00.434 "tpoint_mask": "0x0" 00:05:00.434 }, 00:05:00.434 "dsa": { 00:05:00.434 "mask": "0x200", 00:05:00.434 "tpoint_mask": "0x0" 00:05:00.434 }, 00:05:00.434 "thread": { 00:05:00.434 "mask": "0x400", 00:05:00.434 "tpoint_mask": "0x0" 00:05:00.434 }, 00:05:00.434 "nvme_pcie": { 00:05:00.434 "mask": "0x800", 00:05:00.434 "tpoint_mask": "0x0" 00:05:00.435 }, 00:05:00.435 "iaa": { 00:05:00.435 "mask": "0x1000", 00:05:00.435 "tpoint_mask": "0x0" 00:05:00.435 }, 00:05:00.435 "nvme_tcp": { 00:05:00.435 "mask": "0x2000", 00:05:00.435 "tpoint_mask": "0x0" 00:05:00.435 }, 00:05:00.435 "bdev_nvme": { 00:05:00.435 "mask": "0x4000", 00:05:00.435 "tpoint_mask": "0x0" 00:05:00.435 }, 00:05:00.435 "sock": { 00:05:00.435 "mask": "0x8000", 00:05:00.435 "tpoint_mask": "0x0" 00:05:00.435 } 00:05:00.435 }' 00:05:00.435 16:16:09 -- rpc/rpc.sh@43 -- # jq length 00:05:00.435 16:16:09 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:00.435 16:16:09 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:00.435 16:16:09 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:00.435 16:16:09 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 
00:05:00.693 16:16:09 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:00.693 16:16:09 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:00.693 16:16:09 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:00.693 16:16:09 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:00.693 16:16:09 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:00.693 00:05:00.693 real 0m0.225s 00:05:00.693 user 0m0.189s 00:05:00.693 sys 0m0.029s 00:05:00.693 16:16:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:00.693 16:16:09 -- common/autotest_common.sh@10 -- # set +x 00:05:00.693 ************************************ 00:05:00.693 END TEST rpc_trace_cmd_test 00:05:00.693 ************************************ 00:05:00.693 16:16:09 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:00.693 16:16:09 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:00.693 16:16:09 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:00.693 16:16:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:00.693 16:16:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:00.693 16:16:09 -- common/autotest_common.sh@10 -- # set +x 00:05:00.951 ************************************ 00:05:00.951 START TEST rpc_daemon_integrity 00:05:00.951 ************************************ 00:05:00.951 16:16:09 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:05:00.951 16:16:09 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:00.951 16:16:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:00.951 16:16:09 -- common/autotest_common.sh@10 -- # set +x 00:05:00.951 16:16:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:00.951 16:16:09 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:00.951 16:16:09 -- rpc/rpc.sh@13 -- # jq length 00:05:00.951 16:16:09 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:00.951 16:16:09 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:00.951 16:16:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:00.951 16:16:09 -- common/autotest_common.sh@10 -- # set +x 00:05:00.951 16:16:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:00.951 16:16:09 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:00.951 16:16:09 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:00.951 16:16:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:00.951 16:16:09 -- common/autotest_common.sh@10 -- # set +x 00:05:00.951 16:16:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:00.951 16:16:09 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:00.951 { 00:05:00.951 "name": "Malloc2", 00:05:00.951 "aliases": [ 00:05:00.952 "c235d9e2-147b-4bcb-8587-431b6e6c5db4" 00:05:00.952 ], 00:05:00.952 "product_name": "Malloc disk", 00:05:00.952 "block_size": 512, 00:05:00.952 "num_blocks": 16384, 00:05:00.952 "uuid": "c235d9e2-147b-4bcb-8587-431b6e6c5db4", 00:05:00.952 "assigned_rate_limits": { 00:05:00.952 "rw_ios_per_sec": 0, 00:05:00.952 "rw_mbytes_per_sec": 0, 00:05:00.952 "r_mbytes_per_sec": 0, 00:05:00.952 "w_mbytes_per_sec": 0 00:05:00.952 }, 00:05:00.952 "claimed": false, 00:05:00.952 "zoned": false, 00:05:00.952 "supported_io_types": { 00:05:00.952 "read": true, 00:05:00.952 "write": true, 00:05:00.952 "unmap": true, 00:05:00.952 "write_zeroes": true, 00:05:00.952 "flush": true, 00:05:00.952 "reset": true, 00:05:00.952 "compare": false, 00:05:00.952 "compare_and_write": false, 00:05:00.952 "abort": true, 00:05:00.952 "nvme_admin": false, 00:05:00.952 "nvme_io": false 00:05:00.952 }, 00:05:00.952 "memory_domains": [ 00:05:00.952 { 00:05:00.952 "dma_device_id": "system", 00:05:00.952 
"dma_device_type": 1 00:05:00.952 }, 00:05:00.952 { 00:05:00.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:00.952 "dma_device_type": 2 00:05:00.952 } 00:05:00.952 ], 00:05:00.952 "driver_specific": {} 00:05:00.952 } 00:05:00.952 ]' 00:05:00.952 16:16:09 -- rpc/rpc.sh@17 -- # jq length 00:05:00.952 16:16:09 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:00.952 16:16:09 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:00.952 16:16:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:00.952 16:16:09 -- common/autotest_common.sh@10 -- # set +x 00:05:00.952 [2024-04-26 16:16:09.913357] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:00.952 [2024-04-26 16:16:09.913387] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:00.952 [2024-04-26 16:16:09.913401] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b752b0 00:05:00.952 [2024-04-26 16:16:09.913409] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:00.952 [2024-04-26 16:16:09.914372] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:00.952 [2024-04-26 16:16:09.914393] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:00.952 Passthru0 00:05:00.952 16:16:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:00.952 16:16:09 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:00.952 16:16:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:00.952 16:16:09 -- common/autotest_common.sh@10 -- # set +x 00:05:00.952 16:16:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:00.952 16:16:09 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:00.952 { 00:05:00.952 "name": "Malloc2", 00:05:00.952 "aliases": [ 00:05:00.952 "c235d9e2-147b-4bcb-8587-431b6e6c5db4" 00:05:00.952 ], 00:05:00.952 "product_name": "Malloc disk", 00:05:00.952 "block_size": 512, 00:05:00.952 "num_blocks": 16384, 00:05:00.952 "uuid": "c235d9e2-147b-4bcb-8587-431b6e6c5db4", 00:05:00.952 "assigned_rate_limits": { 00:05:00.952 "rw_ios_per_sec": 0, 00:05:00.952 "rw_mbytes_per_sec": 0, 00:05:00.952 "r_mbytes_per_sec": 0, 00:05:00.952 "w_mbytes_per_sec": 0 00:05:00.952 }, 00:05:00.952 "claimed": true, 00:05:00.952 "claim_type": "exclusive_write", 00:05:00.952 "zoned": false, 00:05:00.952 "supported_io_types": { 00:05:00.952 "read": true, 00:05:00.952 "write": true, 00:05:00.952 "unmap": true, 00:05:00.952 "write_zeroes": true, 00:05:00.952 "flush": true, 00:05:00.952 "reset": true, 00:05:00.952 "compare": false, 00:05:00.952 "compare_and_write": false, 00:05:00.952 "abort": true, 00:05:00.952 "nvme_admin": false, 00:05:00.952 "nvme_io": false 00:05:00.952 }, 00:05:00.952 "memory_domains": [ 00:05:00.952 { 00:05:00.952 "dma_device_id": "system", 00:05:00.952 "dma_device_type": 1 00:05:00.952 }, 00:05:00.952 { 00:05:00.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:00.952 "dma_device_type": 2 00:05:00.952 } 00:05:00.952 ], 00:05:00.952 "driver_specific": {} 00:05:00.952 }, 00:05:00.952 { 00:05:00.952 "name": "Passthru0", 00:05:00.952 "aliases": [ 00:05:00.952 "a5781034-c049-5b40-b4bc-9685a3aa3eee" 00:05:00.952 ], 00:05:00.952 "product_name": "passthru", 00:05:00.952 "block_size": 512, 00:05:00.952 "num_blocks": 16384, 00:05:00.952 "uuid": "a5781034-c049-5b40-b4bc-9685a3aa3eee", 00:05:00.952 "assigned_rate_limits": { 00:05:00.952 "rw_ios_per_sec": 0, 00:05:00.952 "rw_mbytes_per_sec": 0, 00:05:00.952 "r_mbytes_per_sec": 0, 00:05:00.952 
"w_mbytes_per_sec": 0 00:05:00.952 }, 00:05:00.952 "claimed": false, 00:05:00.952 "zoned": false, 00:05:00.952 "supported_io_types": { 00:05:00.952 "read": true, 00:05:00.952 "write": true, 00:05:00.952 "unmap": true, 00:05:00.952 "write_zeroes": true, 00:05:00.952 "flush": true, 00:05:00.952 "reset": true, 00:05:00.952 "compare": false, 00:05:00.952 "compare_and_write": false, 00:05:00.952 "abort": true, 00:05:00.952 "nvme_admin": false, 00:05:00.952 "nvme_io": false 00:05:00.952 }, 00:05:00.952 "memory_domains": [ 00:05:00.952 { 00:05:00.952 "dma_device_id": "system", 00:05:00.952 "dma_device_type": 1 00:05:00.952 }, 00:05:00.952 { 00:05:00.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:00.952 "dma_device_type": 2 00:05:00.952 } 00:05:00.952 ], 00:05:00.952 "driver_specific": { 00:05:00.952 "passthru": { 00:05:00.952 "name": "Passthru0", 00:05:00.952 "base_bdev_name": "Malloc2" 00:05:00.952 } 00:05:00.952 } 00:05:00.952 } 00:05:00.952 ]' 00:05:00.952 16:16:09 -- rpc/rpc.sh@21 -- # jq length 00:05:01.212 16:16:09 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:01.212 16:16:09 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:01.212 16:16:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:01.212 16:16:09 -- common/autotest_common.sh@10 -- # set +x 00:05:01.212 16:16:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:01.212 16:16:10 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:01.212 16:16:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:01.212 16:16:10 -- common/autotest_common.sh@10 -- # set +x 00:05:01.212 16:16:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:01.212 16:16:10 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:01.212 16:16:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:01.212 16:16:10 -- common/autotest_common.sh@10 -- # set +x 00:05:01.212 16:16:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:01.212 16:16:10 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:01.212 16:16:10 -- rpc/rpc.sh@26 -- # jq length 00:05:01.212 16:16:10 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:01.212 00:05:01.212 real 0m0.290s 00:05:01.212 user 0m0.183s 00:05:01.212 sys 0m0.047s 00:05:01.212 16:16:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:01.212 16:16:10 -- common/autotest_common.sh@10 -- # set +x 00:05:01.212 ************************************ 00:05:01.212 END TEST rpc_daemon_integrity 00:05:01.212 ************************************ 00:05:01.212 16:16:10 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:01.212 16:16:10 -- rpc/rpc.sh@84 -- # killprocess 326418 00:05:01.212 16:16:10 -- common/autotest_common.sh@936 -- # '[' -z 326418 ']' 00:05:01.212 16:16:10 -- common/autotest_common.sh@940 -- # kill -0 326418 00:05:01.212 16:16:10 -- common/autotest_common.sh@941 -- # uname 00:05:01.212 16:16:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:01.212 16:16:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 326418 00:05:01.212 16:16:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:01.212 16:16:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:01.212 16:16:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 326418' 00:05:01.212 killing process with pid 326418 00:05:01.212 16:16:10 -- common/autotest_common.sh@955 -- # kill 326418 00:05:01.212 16:16:10 -- common/autotest_common.sh@960 -- # wait 326418 00:05:01.783 00:05:01.783 real 0m3.123s 00:05:01.783 user 0m3.941s 00:05:01.783 
sys 0m1.081s 00:05:01.783 16:16:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:01.783 16:16:10 -- common/autotest_common.sh@10 -- # set +x 00:05:01.783 ************************************ 00:05:01.783 END TEST rpc 00:05:01.783 ************************************ 00:05:01.783 16:16:10 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:01.783 16:16:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:01.783 16:16:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:01.783 16:16:10 -- common/autotest_common.sh@10 -- # set +x 00:05:01.783 ************************************ 00:05:01.783 START TEST skip_rpc 00:05:01.783 ************************************ 00:05:01.783 16:16:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:02.043 * Looking for test storage... 00:05:02.043 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:02.043 16:16:10 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:02.043 16:16:10 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:02.043 16:16:10 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:02.043 16:16:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:02.043 16:16:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:02.043 16:16:10 -- common/autotest_common.sh@10 -- # set +x 00:05:02.043 ************************************ 00:05:02.043 START TEST skip_rpc 00:05:02.043 ************************************ 00:05:02.043 16:16:10 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:05:02.043 16:16:10 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=327015 00:05:02.043 16:16:10 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:02.043 16:16:10 -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:02.043 16:16:10 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:02.043 [2024-04-26 16:16:11.045603] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
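
The END TEST rpc marker above closes test/rpc/rpc.sh: a bdev-traced spdk_tgt was started, rpc_integrity and rpc_daemon_integrity created Malloc/Passthru pairs and checked the claim chain in bdev_get_bdevs, rpc_plugins exercised a plugin-provided method, and rpc_trace_cmd_test confirmed that only the bdev tpoint group (mask 0x8) was enabled. The same flow can be replayed by hand with scripts/rpc.py, the standalone counterpart of the harness's rpc_cmd helper; a minimal sketch (TGT_PID and the sleep are conveniences of this sketch, and the target is assumed to have hugepages available):

    # Start a target with the bdev tracepoint group enabled, as rpc.sh did.
    ./build/bin/spdk_tgt -e bdev &
    TGT_PID=$!
    sleep 2   # crude wait for /var/tmp/spdk.sock; the harness uses waitforlisten

    # 8 MB malloc bdev with 512-byte blocks, wrapped in a passthru bdev;
    # the passthru claim then shows up in bdev_get_bdevs.
    ./scripts/rpc.py bdev_malloc_create 8 512
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    ./scripts/rpc.py bdev_get_bdevs | jq '.[].name'

    # The trace metadata rpc_trace_cmd_test inspects (tpoint_group_mask 0x8 = bdev).
    ./scripts/rpc.py trace_get_info | jq '.tpoint_group_mask'

    # Tear down in the same order as the test, then stop the target.
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0
    kill $TGT_PID
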
00:05:02.043 [2024-04-26 16:16:11.045650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid327015 ] 00:05:02.304 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.304 [2024-04-26 16:16:11.118887] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.304 [2024-04-26 16:16:11.203080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.629 16:16:15 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:07.629 16:16:16 -- common/autotest_common.sh@638 -- # local es=0 00:05:07.629 16:16:16 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:07.629 16:16:16 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:05:07.629 16:16:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:07.629 16:16:16 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:05:07.629 16:16:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:07.629 16:16:16 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:05:07.629 16:16:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:07.629 16:16:16 -- common/autotest_common.sh@10 -- # set +x 00:05:07.629 16:16:16 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:05:07.629 16:16:16 -- common/autotest_common.sh@641 -- # es=1 00:05:07.629 16:16:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:07.629 16:16:16 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:07.629 16:16:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:07.629 16:16:16 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:07.629 16:16:16 -- rpc/skip_rpc.sh@23 -- # killprocess 327015 00:05:07.629 16:16:16 -- common/autotest_common.sh@936 -- # '[' -z 327015 ']' 00:05:07.629 16:16:16 -- common/autotest_common.sh@940 -- # kill -0 327015 00:05:07.629 16:16:16 -- common/autotest_common.sh@941 -- # uname 00:05:07.629 16:16:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:07.629 16:16:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 327015 00:05:07.629 16:16:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:07.629 16:16:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:07.629 16:16:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 327015' 00:05:07.629 killing process with pid 327015 00:05:07.629 16:16:16 -- common/autotest_common.sh@955 -- # kill 327015 00:05:07.629 16:16:16 -- common/autotest_common.sh@960 -- # wait 327015 00:05:07.629 00:05:07.629 real 0m5.412s 00:05:07.629 user 0m5.139s 00:05:07.629 sys 0m0.303s 00:05:07.629 16:16:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:07.629 16:16:16 -- common/autotest_common.sh@10 -- # set +x 00:05:07.629 ************************************ 00:05:07.629 END TEST skip_rpc 00:05:07.629 ************************************ 00:05:07.629 16:16:16 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:07.629 16:16:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:07.629 16:16:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:07.629 16:16:16 -- common/autotest_common.sh@10 -- # set +x 00:05:07.629 ************************************ 00:05:07.629 START TEST skip_rpc_with_json 00:05:07.629 ************************************ 00:05:07.629 
16:16:16 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:05:07.629 16:16:16 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:07.629 16:16:16 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=327787 00:05:07.629 16:16:16 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:07.629 16:16:16 -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:07.629 16:16:16 -- rpc/skip_rpc.sh@31 -- # waitforlisten 327787 00:05:07.629 16:16:16 -- common/autotest_common.sh@817 -- # '[' -z 327787 ']' 00:05:07.629 16:16:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.629 16:16:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:07.629 16:16:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.629 16:16:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:07.629 16:16:16 -- common/autotest_common.sh@10 -- # set +x 00:05:07.629 [2024-04-26 16:16:16.624806] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:05:07.629 [2024-04-26 16:16:16.624862] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid327787 ] 00:05:07.889 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.889 [2024-04-26 16:16:16.697840] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.889 [2024-04-26 16:16:16.784468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.459 16:16:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:08.459 16:16:17 -- common/autotest_common.sh@850 -- # return 0 00:05:08.459 16:16:17 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:08.459 16:16:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:08.459 16:16:17 -- common/autotest_common.sh@10 -- # set +x 00:05:08.459 [2024-04-26 16:16:17.425543] nvmf_rpc.c:2513:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:08.459 request: 00:05:08.459 { 00:05:08.459 "trtype": "tcp", 00:05:08.459 "method": "nvmf_get_transports", 00:05:08.459 "req_id": 1 00:05:08.459 } 00:05:08.459 Got JSON-RPC error response 00:05:08.459 response: 00:05:08.459 { 00:05:08.459 "code": -19, 00:05:08.459 "message": "No such device" 00:05:08.459 } 00:05:08.459 16:16:17 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:05:08.459 16:16:17 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:08.459 16:16:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:08.459 16:16:17 -- common/autotest_common.sh@10 -- # set +x 00:05:08.459 [2024-04-26 16:16:17.437638] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:08.459 16:16:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:08.459 16:16:17 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:08.459 16:16:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:08.459 16:16:17 -- common/autotest_common.sh@10 -- # set +x 00:05:08.718 16:16:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:08.718 16:16:17 -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:08.718 { 00:05:08.718 "subsystems": 
[ 00:05:08.718 { 00:05:08.718 "subsystem": "keyring", 00:05:08.718 "config": [] 00:05:08.718 }, 00:05:08.718 { 00:05:08.718 "subsystem": "iobuf", 00:05:08.718 "config": [ 00:05:08.718 { 00:05:08.718 "method": "iobuf_set_options", 00:05:08.718 "params": { 00:05:08.718 "small_pool_count": 8192, 00:05:08.718 "large_pool_count": 1024, 00:05:08.718 "small_bufsize": 8192, 00:05:08.718 "large_bufsize": 135168 00:05:08.718 } 00:05:08.718 } 00:05:08.718 ] 00:05:08.718 }, 00:05:08.718 { 00:05:08.718 "subsystem": "sock", 00:05:08.718 "config": [ 00:05:08.718 { 00:05:08.719 "method": "sock_impl_set_options", 00:05:08.719 "params": { 00:05:08.719 "impl_name": "posix", 00:05:08.719 "recv_buf_size": 2097152, 00:05:08.719 "send_buf_size": 2097152, 00:05:08.719 "enable_recv_pipe": true, 00:05:08.719 "enable_quickack": false, 00:05:08.719 "enable_placement_id": 0, 00:05:08.719 "enable_zerocopy_send_server": true, 00:05:08.719 "enable_zerocopy_send_client": false, 00:05:08.719 "zerocopy_threshold": 0, 00:05:08.719 "tls_version": 0, 00:05:08.719 "enable_ktls": false 00:05:08.719 } 00:05:08.719 }, 00:05:08.719 { 00:05:08.719 "method": "sock_impl_set_options", 00:05:08.719 "params": { 00:05:08.719 "impl_name": "ssl", 00:05:08.719 "recv_buf_size": 4096, 00:05:08.719 "send_buf_size": 4096, 00:05:08.719 "enable_recv_pipe": true, 00:05:08.719 "enable_quickack": false, 00:05:08.719 "enable_placement_id": 0, 00:05:08.719 "enable_zerocopy_send_server": true, 00:05:08.719 "enable_zerocopy_send_client": false, 00:05:08.719 "zerocopy_threshold": 0, 00:05:08.719 "tls_version": 0, 00:05:08.719 "enable_ktls": false 00:05:08.719 } 00:05:08.719 } 00:05:08.719 ] 00:05:08.719 }, 00:05:08.719 { 00:05:08.719 "subsystem": "vmd", 00:05:08.719 "config": [] 00:05:08.719 }, 00:05:08.719 { 00:05:08.719 "subsystem": "accel", 00:05:08.719 "config": [ 00:05:08.719 { 00:05:08.719 "method": "accel_set_options", 00:05:08.719 "params": { 00:05:08.719 "small_cache_size": 128, 00:05:08.719 "large_cache_size": 16, 00:05:08.719 "task_count": 2048, 00:05:08.719 "sequence_count": 2048, 00:05:08.719 "buf_count": 2048 00:05:08.719 } 00:05:08.719 } 00:05:08.719 ] 00:05:08.719 }, 00:05:08.719 { 00:05:08.719 "subsystem": "bdev", 00:05:08.719 "config": [ 00:05:08.719 { 00:05:08.719 "method": "bdev_set_options", 00:05:08.719 "params": { 00:05:08.719 "bdev_io_pool_size": 65535, 00:05:08.719 "bdev_io_cache_size": 256, 00:05:08.719 "bdev_auto_examine": true, 00:05:08.719 "iobuf_small_cache_size": 128, 00:05:08.719 "iobuf_large_cache_size": 16 00:05:08.719 } 00:05:08.719 }, 00:05:08.719 { 00:05:08.719 "method": "bdev_raid_set_options", 00:05:08.719 "params": { 00:05:08.719 "process_window_size_kb": 1024 00:05:08.719 } 00:05:08.719 }, 00:05:08.719 { 00:05:08.719 "method": "bdev_iscsi_set_options", 00:05:08.719 "params": { 00:05:08.719 "timeout_sec": 30 00:05:08.719 } 00:05:08.719 }, 00:05:08.719 { 00:05:08.719 "method": "bdev_nvme_set_options", 00:05:08.719 "params": { 00:05:08.719 "action_on_timeout": "none", 00:05:08.719 "timeout_us": 0, 00:05:08.719 "timeout_admin_us": 0, 00:05:08.719 "keep_alive_timeout_ms": 10000, 00:05:08.719 "arbitration_burst": 0, 00:05:08.719 "low_priority_weight": 0, 00:05:08.719 "medium_priority_weight": 0, 00:05:08.719 "high_priority_weight": 0, 00:05:08.719 "nvme_adminq_poll_period_us": 10000, 00:05:08.719 "nvme_ioq_poll_period_us": 0, 00:05:08.719 "io_queue_requests": 0, 00:05:08.719 "delay_cmd_submit": true, 00:05:08.719 "transport_retry_count": 4, 00:05:08.719 "bdev_retry_count": 3, 00:05:08.719 "transport_ack_timeout": 0, 
00:05:08.719 "ctrlr_loss_timeout_sec": 0, 00:05:08.719 "reconnect_delay_sec": 0, 00:05:08.719 "fast_io_fail_timeout_sec": 0, 00:05:08.719 "disable_auto_failback": false, 00:05:08.719 "generate_uuids": false, 00:05:08.719 "transport_tos": 0, 00:05:08.719 "nvme_error_stat": false, 00:05:08.719 "rdma_srq_size": 0, 00:05:08.719 "io_path_stat": false, 00:05:08.719 "allow_accel_sequence": false, 00:05:08.719 "rdma_max_cq_size": 0, 00:05:08.719 "rdma_cm_event_timeout_ms": 0, 00:05:08.719 "dhchap_digests": [ 00:05:08.719 "sha256", 00:05:08.719 "sha384", 00:05:08.719 "sha512" 00:05:08.719 ], 00:05:08.719 "dhchap_dhgroups": [ 00:05:08.719 "null", 00:05:08.719 "ffdhe2048", 00:05:08.719 "ffdhe3072", 00:05:08.719 "ffdhe4096", 00:05:08.719 "ffdhe6144", 00:05:08.719 "ffdhe8192" 00:05:08.719 ] 00:05:08.719 } 00:05:08.719 }, 00:05:08.719 { 00:05:08.719 "method": "bdev_nvme_set_hotplug", 00:05:08.719 "params": { 00:05:08.719 "period_us": 100000, 00:05:08.719 "enable": false 00:05:08.719 } 00:05:08.719 }, 00:05:08.719 { 00:05:08.719 "method": "bdev_wait_for_examine" 00:05:08.719 } 00:05:08.719 ] 00:05:08.719 }, 00:05:08.719 { 00:05:08.719 "subsystem": "scsi", 00:05:08.719 "config": null 00:05:08.719 }, 00:05:08.719 { 00:05:08.719 "subsystem": "scheduler", 00:05:08.719 "config": [ 00:05:08.719 { 00:05:08.719 "method": "framework_set_scheduler", 00:05:08.719 "params": { 00:05:08.719 "name": "static" 00:05:08.719 } 00:05:08.719 } 00:05:08.719 ] 00:05:08.719 }, 00:05:08.719 { 00:05:08.719 "subsystem": "vhost_scsi", 00:05:08.719 "config": [] 00:05:08.719 }, 00:05:08.719 { 00:05:08.719 "subsystem": "vhost_blk", 00:05:08.719 "config": [] 00:05:08.719 }, 00:05:08.719 { 00:05:08.719 "subsystem": "ublk", 00:05:08.719 "config": [] 00:05:08.719 }, 00:05:08.719 { 00:05:08.719 "subsystem": "nbd", 00:05:08.719 "config": [] 00:05:08.719 }, 00:05:08.719 { 00:05:08.719 "subsystem": "nvmf", 00:05:08.719 "config": [ 00:05:08.719 { 00:05:08.719 "method": "nvmf_set_config", 00:05:08.719 "params": { 00:05:08.719 "discovery_filter": "match_any", 00:05:08.719 "admin_cmd_passthru": { 00:05:08.719 "identify_ctrlr": false 00:05:08.719 } 00:05:08.719 } 00:05:08.719 }, 00:05:08.719 { 00:05:08.719 "method": "nvmf_set_max_subsystems", 00:05:08.719 "params": { 00:05:08.719 "max_subsystems": 1024 00:05:08.719 } 00:05:08.719 }, 00:05:08.719 { 00:05:08.719 "method": "nvmf_set_crdt", 00:05:08.719 "params": { 00:05:08.719 "crdt1": 0, 00:05:08.719 "crdt2": 0, 00:05:08.719 "crdt3": 0 00:05:08.719 } 00:05:08.719 }, 00:05:08.719 { 00:05:08.719 "method": "nvmf_create_transport", 00:05:08.719 "params": { 00:05:08.719 "trtype": "TCP", 00:05:08.719 "max_queue_depth": 128, 00:05:08.719 "max_io_qpairs_per_ctrlr": 127, 00:05:08.719 "in_capsule_data_size": 4096, 00:05:08.719 "max_io_size": 131072, 00:05:08.719 "io_unit_size": 131072, 00:05:08.719 "max_aq_depth": 128, 00:05:08.719 "num_shared_buffers": 511, 00:05:08.719 "buf_cache_size": 4294967295, 00:05:08.719 "dif_insert_or_strip": false, 00:05:08.719 "zcopy": false, 00:05:08.719 "c2h_success": true, 00:05:08.719 "sock_priority": 0, 00:05:08.719 "abort_timeout_sec": 1, 00:05:08.719 "ack_timeout": 0, 00:05:08.719 "data_wr_pool_size": 0 00:05:08.719 } 00:05:08.719 } 00:05:08.719 ] 00:05:08.719 }, 00:05:08.719 { 00:05:08.719 "subsystem": "iscsi", 00:05:08.719 "config": [ 00:05:08.719 { 00:05:08.719 "method": "iscsi_set_options", 00:05:08.719 "params": { 00:05:08.719 "node_base": "iqn.2016-06.io.spdk", 00:05:08.719 "max_sessions": 128, 00:05:08.719 "max_connections_per_session": 2, 00:05:08.719 
"max_queue_depth": 64, 00:05:08.719 "default_time2wait": 2, 00:05:08.719 "default_time2retain": 20, 00:05:08.719 "first_burst_length": 8192, 00:05:08.719 "immediate_data": true, 00:05:08.719 "allow_duplicated_isid": false, 00:05:08.719 "error_recovery_level": 0, 00:05:08.719 "nop_timeout": 60, 00:05:08.719 "nop_in_interval": 30, 00:05:08.719 "disable_chap": false, 00:05:08.719 "require_chap": false, 00:05:08.719 "mutual_chap": false, 00:05:08.719 "chap_group": 0, 00:05:08.719 "max_large_datain_per_connection": 64, 00:05:08.719 "max_r2t_per_connection": 4, 00:05:08.719 "pdu_pool_size": 36864, 00:05:08.719 "immediate_data_pool_size": 16384, 00:05:08.719 "data_out_pool_size": 2048 00:05:08.719 } 00:05:08.719 } 00:05:08.719 ] 00:05:08.719 } 00:05:08.719 ] 00:05:08.719 } 00:05:08.719 16:16:17 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:08.719 16:16:17 -- rpc/skip_rpc.sh@40 -- # killprocess 327787 00:05:08.719 16:16:17 -- common/autotest_common.sh@936 -- # '[' -z 327787 ']' 00:05:08.719 16:16:17 -- common/autotest_common.sh@940 -- # kill -0 327787 00:05:08.719 16:16:17 -- common/autotest_common.sh@941 -- # uname 00:05:08.719 16:16:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:08.719 16:16:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 327787 00:05:08.719 16:16:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:08.719 16:16:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:08.719 16:16:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 327787' 00:05:08.719 killing process with pid 327787 00:05:08.719 16:16:17 -- common/autotest_common.sh@955 -- # kill 327787 00:05:08.719 16:16:17 -- common/autotest_common.sh@960 -- # wait 327787 00:05:08.979 16:16:17 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=328001 00:05:08.979 16:16:17 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:08.979 16:16:17 -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:14.254 16:16:23 -- rpc/skip_rpc.sh@50 -- # killprocess 328001 00:05:14.254 16:16:23 -- common/autotest_common.sh@936 -- # '[' -z 328001 ']' 00:05:14.254 16:16:23 -- common/autotest_common.sh@940 -- # kill -0 328001 00:05:14.254 16:16:23 -- common/autotest_common.sh@941 -- # uname 00:05:14.254 16:16:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:14.254 16:16:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 328001 00:05:14.254 16:16:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:14.254 16:16:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:14.254 16:16:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 328001' 00:05:14.254 killing process with pid 328001 00:05:14.254 16:16:23 -- common/autotest_common.sh@955 -- # kill 328001 00:05:14.254 16:16:23 -- common/autotest_common.sh@960 -- # wait 328001 00:05:14.512 16:16:23 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:14.512 16:16:23 -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:14.512 00:05:14.512 real 0m6.813s 00:05:14.512 user 0m6.544s 00:05:14.512 sys 0m0.671s 00:05:14.512 16:16:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:14.512 16:16:23 -- common/autotest_common.sh@10 -- # set +x 00:05:14.512 
************************************ 00:05:14.512 END TEST skip_rpc_with_json 00:05:14.512 ************************************ 00:05:14.512 16:16:23 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:14.512 16:16:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:14.512 16:16:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:14.512 16:16:23 -- common/autotest_common.sh@10 -- # set +x 00:05:14.771 ************************************ 00:05:14.771 START TEST skip_rpc_with_delay 00:05:14.772 ************************************ 00:05:14.772 16:16:23 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:05:14.772 16:16:23 -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:14.772 16:16:23 -- common/autotest_common.sh@638 -- # local es=0 00:05:14.772 16:16:23 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:14.772 16:16:23 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.772 16:16:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:14.772 16:16:23 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.772 16:16:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:14.772 16:16:23 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.772 16:16:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:14.772 16:16:23 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.772 16:16:23 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:14.772 16:16:23 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:14.772 [2024-04-26 16:16:23.637595] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
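The *ERROR* line above is the expected result of the skip_rpc_with_delay test: --wait-for-rpc defers subsystem initialization until an RPC asks for it, so pairing it with --no-rpc-server leaves the target with no way to finish starting and spdk_app_start bails out. For reference, a sketch of the combination that does work; the framework_start_init RPC name comes from general SPDK usage rather than from this log, so treat it as an assumption:

    #!/usr/bin/env bash
    # Sketch only: start the target with deferred init, then resume it over RPC.
    # Paths, the sleep-based wait and framework_start_init are assumptions.
    set -euo pipefail
    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk

    "$SPDK"/build/bin/spdk_tgt -m 0x1 --wait-for-rpc &   # RPC server starts, subsystems wait
    sleep 1                                              # crude wait for /var/tmp/spdk.sock

    # Early-boot configuration RPCs would go here, then initialization is resumed:
    "$SPDK"/scripts/rpc.py framework_start_init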
00:05:14.772 [2024-04-26 16:16:23.637689] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:14.772 16:16:23 -- common/autotest_common.sh@641 -- # es=1 00:05:14.772 16:16:23 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:14.772 16:16:23 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:14.772 16:16:23 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:14.772 00:05:14.772 real 0m0.072s 00:05:14.772 user 0m0.046s 00:05:14.772 sys 0m0.025s 00:05:14.772 16:16:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:14.772 16:16:23 -- common/autotest_common.sh@10 -- # set +x 00:05:14.772 ************************************ 00:05:14.772 END TEST skip_rpc_with_delay 00:05:14.772 ************************************ 00:05:14.772 16:16:23 -- rpc/skip_rpc.sh@77 -- # uname 00:05:14.772 16:16:23 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:14.772 16:16:23 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:14.772 16:16:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:14.772 16:16:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:14.772 16:16:23 -- common/autotest_common.sh@10 -- # set +x 00:05:15.031 ************************************ 00:05:15.031 START TEST exit_on_failed_rpc_init 00:05:15.031 ************************************ 00:05:15.031 16:16:23 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:05:15.031 16:16:23 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=328931 00:05:15.031 16:16:23 -- rpc/skip_rpc.sh@63 -- # waitforlisten 328931 00:05:15.031 16:16:23 -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:15.031 16:16:23 -- common/autotest_common.sh@817 -- # '[' -z 328931 ']' 00:05:15.031 16:16:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.031 16:16:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:15.031 16:16:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.031 16:16:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:15.031 16:16:23 -- common/autotest_common.sh@10 -- # set +x 00:05:15.031 [2024-04-26 16:16:23.915662] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
00:05:15.031 [2024-04-26 16:16:23.915726] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid328931 ] 00:05:15.031 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.031 [2024-04-26 16:16:23.989685] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.290 [2024-04-26 16:16:24.075766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.858 16:16:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:15.858 16:16:24 -- common/autotest_common.sh@850 -- # return 0 00:05:15.858 16:16:24 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:15.858 16:16:24 -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:15.858 16:16:24 -- common/autotest_common.sh@638 -- # local es=0 00:05:15.858 16:16:24 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:15.858 16:16:24 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:15.858 16:16:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:15.858 16:16:24 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:15.858 16:16:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:15.858 16:16:24 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:15.858 16:16:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:15.858 16:16:24 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:15.858 16:16:24 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:15.858 16:16:24 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:15.858 [2024-04-26 16:16:24.748106] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:05:15.858 [2024-04-26 16:16:24.748164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid328955 ] 00:05:15.858 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.858 [2024-04-26 16:16:24.821062] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.118 [2024-04-26 16:16:24.902603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.118 [2024-04-26 16:16:24.902670] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
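That 'in use. Specify another.' error is exactly what exit_on_failed_rpc_init is probing for: the second spdk_tgt instance (core mask 0x2) tries to claim the same default RPC socket as the first and is expected to give up with a non-zero exit code. Outside this negative test, two targets can coexist by giving each its own socket with -r, as the json_config test further down does with /var/tmp/spdk_tgt.sock. A minimal sketch, with both socket paths chosen arbitrarily:

    #!/usr/bin/env bash
    # Sketch only: run two SPDK targets side by side on separate RPC sockets.
    # Socket paths and core masks are illustrative.
    set -euo pipefail
    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk

    "$SPDK"/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_a.sock &
    "$SPDK"/build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_b.sock &

    # Each instance is then addressed explicitly, e.g.:
    #   scripts/rpc.py -s /var/tmp/spdk_a.sock save_config
    #   scripts/rpc.py -s /var/tmp/spdk_b.sock save_config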
00:05:16.118 [2024-04-26 16:16:24.902682] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:16.118 [2024-04-26 16:16:24.902690] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:16.118 16:16:24 -- common/autotest_common.sh@641 -- # es=234 00:05:16.118 16:16:24 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:16.118 16:16:24 -- common/autotest_common.sh@650 -- # es=106 00:05:16.118 16:16:24 -- common/autotest_common.sh@651 -- # case "$es" in 00:05:16.118 16:16:24 -- common/autotest_common.sh@658 -- # es=1 00:05:16.118 16:16:25 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:16.118 16:16:25 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:16.118 16:16:25 -- rpc/skip_rpc.sh@70 -- # killprocess 328931 00:05:16.118 16:16:25 -- common/autotest_common.sh@936 -- # '[' -z 328931 ']' 00:05:16.118 16:16:25 -- common/autotest_common.sh@940 -- # kill -0 328931 00:05:16.118 16:16:25 -- common/autotest_common.sh@941 -- # uname 00:05:16.118 16:16:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:16.118 16:16:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 328931 00:05:16.118 16:16:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:16.118 16:16:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:16.118 16:16:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 328931' 00:05:16.118 killing process with pid 328931 00:05:16.118 16:16:25 -- common/autotest_common.sh@955 -- # kill 328931 00:05:16.118 16:16:25 -- common/autotest_common.sh@960 -- # wait 328931 00:05:16.377 00:05:16.377 real 0m1.535s 00:05:16.377 user 0m1.724s 00:05:16.377 sys 0m0.452s 00:05:16.377 16:16:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:16.377 16:16:25 -- common/autotest_common.sh@10 -- # set +x 00:05:16.377 ************************************ 00:05:16.377 END TEST exit_on_failed_rpc_init 00:05:16.377 ************************************ 00:05:16.636 16:16:25 -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:16.636 00:05:16.636 real 0m14.709s 00:05:16.636 user 0m13.766s 00:05:16.636 sys 0m1.996s 00:05:16.636 16:16:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:16.636 16:16:25 -- common/autotest_common.sh@10 -- # set +x 00:05:16.636 ************************************ 00:05:16.636 END TEST skip_rpc 00:05:16.636 ************************************ 00:05:16.636 16:16:25 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:16.636 16:16:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:16.636 16:16:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:16.636 16:16:25 -- common/autotest_common.sh@10 -- # set +x 00:05:16.636 ************************************ 00:05:16.636 START TEST rpc_client 00:05:16.636 ************************************ 00:05:16.636 16:16:25 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:16.896 * Looking for test storage... 
00:05:16.896 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:05:16.896 16:16:25 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:16.896 OK 00:05:16.896 16:16:25 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:16.896 00:05:16.896 real 0m0.138s 00:05:16.896 user 0m0.061s 00:05:16.896 sys 0m0.086s 00:05:16.896 16:16:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:16.896 16:16:25 -- common/autotest_common.sh@10 -- # set +x 00:05:16.896 ************************************ 00:05:16.896 END TEST rpc_client 00:05:16.896 ************************************ 00:05:16.896 16:16:25 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:16.896 16:16:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:16.896 16:16:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:16.896 16:16:25 -- common/autotest_common.sh@10 -- # set +x 00:05:17.156 ************************************ 00:05:17.156 START TEST json_config 00:05:17.156 ************************************ 00:05:17.156 16:16:25 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:17.156 16:16:26 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:17.156 16:16:26 -- nvmf/common.sh@7 -- # uname -s 00:05:17.156 16:16:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:17.156 16:16:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:17.156 16:16:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:17.156 16:16:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:17.156 16:16:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:17.156 16:16:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:17.156 16:16:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:17.156 16:16:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:17.156 16:16:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:17.156 16:16:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:17.156 16:16:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:05:17.156 16:16:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:05:17.156 16:16:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:17.156 16:16:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:17.156 16:16:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:17.156 16:16:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:17.156 16:16:26 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:17.156 16:16:26 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:17.156 16:16:26 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:17.156 16:16:26 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:17.156 16:16:26 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.156 16:16:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.156 16:16:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.156 16:16:26 -- paths/export.sh@5 -- # export PATH 00:05:17.156 16:16:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.156 16:16:26 -- nvmf/common.sh@47 -- # : 0 00:05:17.156 16:16:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:17.156 16:16:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:17.156 16:16:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:17.156 16:16:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:17.156 16:16:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:17.156 16:16:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:17.156 16:16:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:17.156 16:16:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:17.156 16:16:26 -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:05:17.156 16:16:26 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:17.156 16:16:26 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:17.156 16:16:26 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:17.156 16:16:26 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:17.156 16:16:26 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:17.156 16:16:26 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:17.156 16:16:26 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:17.156 16:16:26 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:17.156 16:16:26 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:17.156 16:16:26 -- 
json_config/json_config.sh@33 -- # declare -A app_params 00:05:17.156 16:16:26 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:05:17.156 16:16:26 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:17.156 16:16:26 -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:17.156 16:16:26 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:17.156 16:16:26 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:17.156 INFO: JSON configuration test init 00:05:17.156 16:16:26 -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:17.156 16:16:26 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:17.156 16:16:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:17.156 16:16:26 -- common/autotest_common.sh@10 -- # set +x 00:05:17.156 16:16:26 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:17.156 16:16:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:17.156 16:16:26 -- common/autotest_common.sh@10 -- # set +x 00:05:17.156 16:16:26 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:17.156 16:16:26 -- json_config/common.sh@9 -- # local app=target 00:05:17.156 16:16:26 -- json_config/common.sh@10 -- # shift 00:05:17.156 16:16:26 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:17.156 16:16:26 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:17.156 16:16:26 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:17.156 16:16:26 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:17.156 16:16:26 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:17.156 16:16:26 -- json_config/common.sh@22 -- # app_pid["$app"]=329269 00:05:17.156 16:16:26 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:17.156 Waiting for target to run... 00:05:17.156 16:16:26 -- json_config/common.sh@25 -- # waitforlisten 329269 /var/tmp/spdk_tgt.sock 00:05:17.156 16:16:26 -- common/autotest_common.sh@817 -- # '[' -z 329269 ']' 00:05:17.156 16:16:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:17.156 16:16:26 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:17.157 16:16:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:17.157 16:16:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:17.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:17.157 16:16:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:17.157 16:16:26 -- common/autotest_common.sh@10 -- # set +x 00:05:17.157 [2024-04-26 16:16:26.156082] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
00:05:17.157 [2024-04-26 16:16:26.156153] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid329269 ] 00:05:17.416 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.675 [2024-04-26 16:16:26.486071] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.675 [2024-04-26 16:16:26.552747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.934 16:16:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:17.934 16:16:26 -- common/autotest_common.sh@850 -- # return 0 00:05:17.934 16:16:26 -- json_config/common.sh@26 -- # echo '' 00:05:17.934 00:05:17.934 16:16:26 -- json_config/json_config.sh@269 -- # create_accel_config 00:05:17.934 16:16:26 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:17.934 16:16:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:17.934 16:16:26 -- common/autotest_common.sh@10 -- # set +x 00:05:17.934 16:16:26 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:17.935 16:16:26 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:17.935 16:16:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:17.935 16:16:26 -- common/autotest_common.sh@10 -- # set +x 00:05:18.194 16:16:26 -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:18.194 16:16:26 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:18.194 16:16:26 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:19.132 16:16:28 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:19.132 16:16:28 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:19.132 16:16:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:19.132 16:16:28 -- common/autotest_common.sh@10 -- # set +x 00:05:19.392 16:16:28 -- json_config/json_config.sh@45 -- # local ret=0 00:05:19.392 16:16:28 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:19.392 16:16:28 -- json_config/json_config.sh@46 -- # local enabled_types 00:05:19.392 16:16:28 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:19.392 16:16:28 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:19.392 16:16:28 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:19.392 16:16:28 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:19.392 16:16:28 -- json_config/json_config.sh@48 -- # local get_types 00:05:19.392 16:16:28 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:19.392 16:16:28 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:19.392 16:16:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:19.392 16:16:28 -- common/autotest_common.sh@10 -- # set +x 00:05:19.392 16:16:28 -- json_config/json_config.sh@55 -- # return 0 00:05:19.392 16:16:28 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:19.392 16:16:28 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:19.392 16:16:28 -- json_config/json_config.sh@286 -- # 
[[ 0 -eq 1 ]] 00:05:19.392 16:16:28 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:19.392 16:16:28 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:19.392 16:16:28 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:19.392 16:16:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:19.392 16:16:28 -- common/autotest_common.sh@10 -- # set +x 00:05:19.392 16:16:28 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:19.392 16:16:28 -- json_config/json_config.sh@233 -- # [[ rdma == \r\d\m\a ]] 00:05:19.392 16:16:28 -- json_config/json_config.sh@234 -- # TEST_TRANSPORT=rdma 00:05:19.392 16:16:28 -- json_config/json_config.sh@234 -- # nvmftestinit 00:05:19.392 16:16:28 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:05:19.392 16:16:28 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:19.392 16:16:28 -- nvmf/common.sh@437 -- # prepare_net_devs 00:05:19.392 16:16:28 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:05:19.392 16:16:28 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:05:19.392 16:16:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:19.392 16:16:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:05:19.392 16:16:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:19.392 16:16:28 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:05:19.392 16:16:28 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:05:19.392 16:16:28 -- nvmf/common.sh@285 -- # xtrace_disable 00:05:19.392 16:16:28 -- common/autotest_common.sh@10 -- # set +x 00:05:25.964 16:16:34 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:05:25.964 16:16:34 -- nvmf/common.sh@291 -- # pci_devs=() 00:05:25.964 16:16:34 -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:25.964 16:16:34 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:25.964 16:16:34 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:25.964 16:16:34 -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:25.964 16:16:34 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:25.964 16:16:34 -- nvmf/common.sh@295 -- # net_devs=() 00:05:25.964 16:16:34 -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:25.964 16:16:34 -- nvmf/common.sh@296 -- # e810=() 00:05:25.964 16:16:34 -- nvmf/common.sh@296 -- # local -ga e810 00:05:25.964 16:16:34 -- nvmf/common.sh@297 -- # x722=() 00:05:25.964 16:16:34 -- nvmf/common.sh@297 -- # local -ga x722 00:05:25.964 16:16:34 -- nvmf/common.sh@298 -- # mlx=() 00:05:25.964 16:16:34 -- nvmf/common.sh@298 -- # local -ga mlx 00:05:25.964 16:16:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:25.964 16:16:34 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:25.964 16:16:34 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:25.964 16:16:34 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:25.964 16:16:34 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:25.964 16:16:34 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:25.964 16:16:34 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:25.964 16:16:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:25.964 16:16:34 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:25.964 16:16:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 
00:05:25.964 16:16:34 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:25.964 16:16:34 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:25.964 16:16:34 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:05:25.964 16:16:34 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:05:25.964 16:16:34 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:05:25.964 16:16:34 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:05:25.964 16:16:34 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:05:25.964 16:16:34 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:25.964 16:16:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:25.964 16:16:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:05:25.964 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:05:25.964 16:16:34 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:05:25.964 16:16:34 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:05:25.964 16:16:34 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:05:25.964 16:16:34 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:05:25.964 16:16:34 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:05:25.964 16:16:34 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:05:25.964 16:16:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:25.964 16:16:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:05:25.964 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:05:25.964 16:16:34 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:05:25.964 16:16:34 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:05:25.964 16:16:34 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:05:25.964 16:16:34 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:05:25.964 16:16:34 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:05:25.964 16:16:34 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:05:25.964 16:16:34 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:25.964 16:16:34 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:05:25.964 16:16:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:25.964 16:16:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:25.964 16:16:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:05:25.964 16:16:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:25.964 16:16:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:05:25.964 Found net devices under 0000:18:00.0: mlx_0_0 00:05:25.964 16:16:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:05:25.964 16:16:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:25.964 16:16:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:25.964 16:16:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:05:25.964 16:16:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:25.964 16:16:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:05:25.964 Found net devices under 0000:18:00.1: mlx_0_1 00:05:25.964 16:16:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:05:25.964 16:16:34 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:05:25.964 16:16:34 -- nvmf/common.sh@403 -- # is_hw=yes 00:05:25.964 16:16:34 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:05:25.964 16:16:34 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:05:25.964 16:16:34 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:05:25.964 16:16:34 
-- nvmf/common.sh@409 -- # rdma_device_init 00:05:25.964 16:16:34 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:05:25.964 16:16:34 -- nvmf/common.sh@58 -- # uname 00:05:25.964 16:16:34 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:05:25.964 16:16:34 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:05:25.964 16:16:34 -- nvmf/common.sh@63 -- # modprobe ib_core 00:05:25.964 16:16:34 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:05:25.964 16:16:34 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:05:25.964 16:16:34 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:05:25.964 16:16:34 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:05:25.964 16:16:34 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:05:25.964 16:16:34 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:05:25.964 16:16:34 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:05:25.964 16:16:34 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:05:25.964 16:16:34 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:25.964 16:16:34 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:05:25.964 16:16:34 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:05:25.964 16:16:34 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:25.964 16:16:34 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:05:25.964 16:16:34 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:25.964 16:16:34 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:25.964 16:16:34 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:25.964 16:16:34 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:05:25.964 16:16:34 -- nvmf/common.sh@105 -- # continue 2 00:05:25.964 16:16:34 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:25.964 16:16:34 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:25.964 16:16:34 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:25.964 16:16:34 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:25.964 16:16:34 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:25.964 16:16:34 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:05:25.964 16:16:34 -- nvmf/common.sh@105 -- # continue 2 00:05:25.964 16:16:34 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:05:25.964 16:16:34 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:05:25.964 16:16:34 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:05:25.964 16:16:34 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:05:25.964 16:16:34 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:25.964 16:16:34 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:25.964 16:16:34 -- nvmf/common.sh@74 -- # ip= 00:05:25.964 16:16:34 -- nvmf/common.sh@75 -- # [[ -z '' ]] 00:05:25.964 16:16:34 -- nvmf/common.sh@76 -- # ip addr add 192.168.100.8/24 dev mlx_0_0 00:05:25.964 16:16:34 -- nvmf/common.sh@77 -- # ip link set mlx_0_0 up 00:05:25.964 16:16:34 -- nvmf/common.sh@78 -- # (( count = count + 1 )) 00:05:25.964 16:16:34 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:05:25.964 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:25.964 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:05:25.964 altname enp24s0f0np0 00:05:25.964 altname ens785f0np0 00:05:25.964 inet 192.168.100.8/24 scope global mlx_0_0 00:05:25.964 valid_lft forever preferred_lft forever 00:05:25.964 16:16:34 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:05:25.964 16:16:34 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 
00:05:25.965 16:16:34 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:05:25.965 16:16:34 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:05:25.965 16:16:34 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:25.965 16:16:34 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:25.965 16:16:34 -- nvmf/common.sh@74 -- # ip= 00:05:25.965 16:16:34 -- nvmf/common.sh@75 -- # [[ -z '' ]] 00:05:26.224 16:16:34 -- nvmf/common.sh@76 -- # ip addr add 192.168.100.9/24 dev mlx_0_1 00:05:26.224 16:16:34 -- nvmf/common.sh@77 -- # ip link set mlx_0_1 up 00:05:26.224 16:16:34 -- nvmf/common.sh@78 -- # (( count = count + 1 )) 00:05:26.224 16:16:34 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:05:26.224 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:26.224 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:05:26.224 altname enp24s0f1np1 00:05:26.224 altname ens785f1np1 00:05:26.224 inet 192.168.100.9/24 scope global mlx_0_1 00:05:26.224 valid_lft forever preferred_lft forever 00:05:26.224 16:16:35 -- nvmf/common.sh@411 -- # return 0 00:05:26.224 16:16:35 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:05:26.224 16:16:35 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:26.224 16:16:35 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:05:26.224 16:16:35 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:05:26.224 16:16:35 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:05:26.224 16:16:35 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:26.224 16:16:35 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:05:26.224 16:16:35 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:05:26.224 16:16:35 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:26.224 16:16:35 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:05:26.224 16:16:35 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:26.224 16:16:35 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:26.224 16:16:35 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:26.224 16:16:35 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:05:26.224 16:16:35 -- nvmf/common.sh@105 -- # continue 2 00:05:26.224 16:16:35 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:26.224 16:16:35 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:26.224 16:16:35 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:26.224 16:16:35 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:26.224 16:16:35 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:26.224 16:16:35 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:05:26.224 16:16:35 -- nvmf/common.sh@105 -- # continue 2 00:05:26.224 16:16:35 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:05:26.224 16:16:35 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:05:26.224 16:16:35 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:05:26.224 16:16:35 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:05:26.224 16:16:35 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:26.224 16:16:35 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:26.224 16:16:35 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:05:26.224 16:16:35 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:05:26.224 16:16:35 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:05:26.224 16:16:35 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:05:26.224 16:16:35 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:26.224 16:16:35 -- 
nvmf/common.sh@113 -- # cut -d/ -f1 00:05:26.224 16:16:35 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:05:26.224 192.168.100.9' 00:05:26.224 16:16:35 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:05:26.224 192.168.100.9' 00:05:26.224 16:16:35 -- nvmf/common.sh@446 -- # head -n 1 00:05:26.224 16:16:35 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:26.224 16:16:35 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:05:26.224 192.168.100.9' 00:05:26.224 16:16:35 -- nvmf/common.sh@447 -- # tail -n +2 00:05:26.224 16:16:35 -- nvmf/common.sh@447 -- # head -n 1 00:05:26.224 16:16:35 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:26.224 16:16:35 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:05:26.224 16:16:35 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:26.224 16:16:35 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:05:26.224 16:16:35 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:05:26.224 16:16:35 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:05:26.224 16:16:35 -- json_config/json_config.sh@237 -- # [[ -z 192.168.100.8 ]] 00:05:26.224 16:16:35 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:26.224 16:16:35 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:26.483 MallocForNvmf0 00:05:26.483 16:16:35 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:26.483 16:16:35 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:26.483 MallocForNvmf1 00:05:26.483 16:16:35 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:05:26.483 16:16:35 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:05:26.742 [2024-04-26 16:16:35.595046] rdma.c:2712:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:05:26.742 [2024-04-26 16:16:35.641412] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb51630/0xbf3400) succeed. 00:05:26.742 [2024-04-26 16:16:35.652352] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb53820/0xb5e380) succeed. 
00:05:26.742 16:16:35 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:26.742 16:16:35 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:27.002 16:16:35 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:27.002 16:16:35 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:27.262 16:16:36 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:27.262 16:16:36 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:27.262 16:16:36 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:27.262 16:16:36 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:27.521 [2024-04-26 16:16:36.361886] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:27.521 16:16:36 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:27.521 16:16:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:27.521 16:16:36 -- common/autotest_common.sh@10 -- # set +x 00:05:27.521 16:16:36 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:27.521 16:16:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:27.521 16:16:36 -- common/autotest_common.sh@10 -- # set +x 00:05:27.521 16:16:36 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:27.521 16:16:36 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:27.521 16:16:36 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:27.781 MallocBdevForConfigChangeCheck 00:05:27.781 16:16:36 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:27.781 16:16:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:27.781 16:16:36 -- common/autotest_common.sh@10 -- # set +x 00:05:27.781 16:16:36 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:27.781 16:16:36 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:28.040 16:16:36 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:28.040 INFO: shutting down applications... 
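Taken together, the RPCs above are the entire NVMe-oF/RDMA configuration that the json_config test saves to spdk_tgt_config.json and later replays: two malloc bdevs, an RDMA transport, one subsystem exposing both namespaces, a listener on 192.168.100.8:4420, and one extra bdev whose only job is to make the later 'configuration change detected' check fire. Collected into a single script (a sketch; the rpc.py socket matches the -r argument used above and the arguments are copied verbatim from the log):

    #!/usr/bin/env bash
    # Sketch only: rebuild the configuration shown above with individual RPCs.
    # All values (sizes, NQN, serial, IP, port) are taken from the log entries above.
    set -euo pipefail
    RPC="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

    $RPC bdev_malloc_create 8 512  --name MallocForNvmf0
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
    $RPC nvmf_create_transport -t rdma -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $RPC bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck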
00:05:28.040 16:16:36 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:28.040 16:16:36 -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:28.040 16:16:36 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:28.040 16:16:36 -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:28.609 Calling clear_iscsi_subsystem 00:05:28.609 Calling clear_nvmf_subsystem 00:05:28.609 Calling clear_nbd_subsystem 00:05:28.609 Calling clear_ublk_subsystem 00:05:28.609 Calling clear_vhost_blk_subsystem 00:05:28.609 Calling clear_vhost_scsi_subsystem 00:05:28.609 Calling clear_bdev_subsystem 00:05:28.609 16:16:37 -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:05:28.609 16:16:37 -- json_config/json_config.sh@343 -- # count=100 00:05:28.609 16:16:37 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:28.609 16:16:37 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:28.609 16:16:37 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:28.609 16:16:37 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:28.869 16:16:37 -- json_config/json_config.sh@345 -- # break 00:05:28.870 16:16:37 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:28.870 16:16:37 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:28.870 16:16:37 -- json_config/common.sh@31 -- # local app=target 00:05:28.870 16:16:37 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:28.870 16:16:37 -- json_config/common.sh@35 -- # [[ -n 329269 ]] 00:05:28.870 16:16:37 -- json_config/common.sh@38 -- # kill -SIGINT 329269 00:05:28.870 16:16:37 -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:28.870 16:16:37 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:28.870 16:16:37 -- json_config/common.sh@41 -- # kill -0 329269 00:05:28.870 16:16:37 -- json_config/common.sh@45 -- # sleep 0.5 00:05:29.130 [2024-04-26 16:16:38.002508] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:05:29.389 16:16:38 -- json_config/common.sh@40 -- # (( i++ )) 00:05:29.389 16:16:38 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:29.389 16:16:38 -- json_config/common.sh@41 -- # kill -0 329269 00:05:29.389 16:16:38 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:29.389 16:16:38 -- json_config/common.sh@43 -- # break 00:05:29.389 16:16:38 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:29.389 16:16:38 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:29.389 SPDK target shutdown done 00:05:29.389 16:16:38 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:29.389 INFO: relaunching applications... 
00:05:29.389 16:16:38 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:29.389 16:16:38 -- json_config/common.sh@9 -- # local app=target 00:05:29.389 16:16:38 -- json_config/common.sh@10 -- # shift 00:05:29.389 16:16:38 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:29.389 16:16:38 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:29.389 16:16:38 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:29.389 16:16:38 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:29.389 16:16:38 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:29.389 16:16:38 -- json_config/common.sh@22 -- # app_pid["$app"]=333081 00:05:29.390 16:16:38 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:29.390 Waiting for target to run... 00:05:29.390 16:16:38 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:29.390 16:16:38 -- json_config/common.sh@25 -- # waitforlisten 333081 /var/tmp/spdk_tgt.sock 00:05:29.390 16:16:38 -- common/autotest_common.sh@817 -- # '[' -z 333081 ']' 00:05:29.390 16:16:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:29.390 16:16:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:29.390 16:16:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:29.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:29.390 16:16:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:29.390 16:16:38 -- common/autotest_common.sh@10 -- # set +x 00:05:29.649 [2024-04-26 16:16:38.451638] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:05:29.649 [2024-04-26 16:16:38.451704] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid333081 ] 00:05:29.649 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.219 [2024-04-26 16:16:38.978071] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.219 [2024-04-26 16:16:39.067749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.598 [2024-04-26 16:16:40.191390] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ba65c0/0x1ba6a40) succeed. 00:05:31.598 [2024-04-26 16:16:40.202419] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ba5560/0x1beff80) succeed. 00:05:31.598 [2024-04-26 16:16:40.255121] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:32.166 16:16:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:32.166 16:16:40 -- common/autotest_common.sh@850 -- # return 0 00:05:32.166 16:16:40 -- json_config/common.sh@26 -- # echo '' 00:05:32.166 00:05:32.166 16:16:40 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:32.166 16:16:40 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:32.166 INFO: Checking if target configuration is the same... 
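With the target relaunched from spdk_tgt_config.json and the RDMA listener back on port 4420, the remaining steps below check that nothing was lost in the round trip: the live configuration is exported again and diffed against the saved file after both are normalized with config_filter.py -method sort, then MallocBdevForConfigChangeCheck is deleted and the diff is repeated to confirm the change is noticed. A stand-alone sketch of that comparison, assuming the saved file already exists and that config_filter.py reads the JSON on stdin the way json_diff.sh feeds it:

    #!/usr/bin/env bash
    # Sketch only: compare a saved SPDK config against the live one.
    # File names are placeholders; -method sort is the normalizer used by json_diff.sh.
    set -euo pipefail
    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    FILTER="$SPDK"/test/json_config/config_filter.py

    "$SPDK"/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/live_config.json

    if diff -u <("$FILTER" -method sort < /tmp/saved_config.json) \
               <("$FILTER" -method sort < /tmp/live_config.json); then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi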
00:05:32.166 16:16:40 -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:32.166 16:16:40 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:32.166 16:16:40 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:32.166 + '[' 2 -ne 2 ']' 00:05:32.166 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:32.166 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:05:32.166 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:32.166 +++ basename /dev/fd/62 00:05:32.166 ++ mktemp /tmp/62.XXX 00:05:32.166 + tmp_file_1=/tmp/62.aWC 00:05:32.166 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:32.166 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:32.166 + tmp_file_2=/tmp/spdk_tgt_config.json.sGj 00:05:32.166 + ret=0 00:05:32.166 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:32.425 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:32.425 + diff -u /tmp/62.aWC /tmp/spdk_tgt_config.json.sGj 00:05:32.425 + echo 'INFO: JSON config files are the same' 00:05:32.425 INFO: JSON config files are the same 00:05:32.425 + rm /tmp/62.aWC /tmp/spdk_tgt_config.json.sGj 00:05:32.425 + exit 0 00:05:32.425 16:16:41 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:32.425 16:16:41 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:32.425 INFO: changing configuration and checking if this can be detected... 00:05:32.425 16:16:41 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:32.425 16:16:41 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:32.683 16:16:41 -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:32.683 16:16:41 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:32.683 16:16:41 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:32.683 + '[' 2 -ne 2 ']' 00:05:32.683 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:32.683 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 
00:05:32.683 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:32.683 +++ basename /dev/fd/62 00:05:32.683 ++ mktemp /tmp/62.XXX 00:05:32.683 + tmp_file_1=/tmp/62.YmS 00:05:32.683 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:32.683 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:32.683 + tmp_file_2=/tmp/spdk_tgt_config.json.r4p 00:05:32.683 + ret=0 00:05:32.683 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:32.942 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:32.942 + diff -u /tmp/62.YmS /tmp/spdk_tgt_config.json.r4p 00:05:32.942 + ret=1 00:05:32.942 + echo '=== Start of file: /tmp/62.YmS ===' 00:05:32.942 + cat /tmp/62.YmS 00:05:32.942 + echo '=== End of file: /tmp/62.YmS ===' 00:05:32.942 + echo '' 00:05:32.942 + echo '=== Start of file: /tmp/spdk_tgt_config.json.r4p ===' 00:05:32.942 + cat /tmp/spdk_tgt_config.json.r4p 00:05:32.943 + echo '=== End of file: /tmp/spdk_tgt_config.json.r4p ===' 00:05:32.943 + echo '' 00:05:32.943 + rm /tmp/62.YmS /tmp/spdk_tgt_config.json.r4p 00:05:32.943 + exit 1 00:05:32.943 16:16:41 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:32.943 INFO: configuration change detected. 00:05:32.943 16:16:41 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:32.943 16:16:41 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:32.943 16:16:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:32.943 16:16:41 -- common/autotest_common.sh@10 -- # set +x 00:05:32.943 16:16:41 -- json_config/json_config.sh@307 -- # local ret=0 00:05:32.943 16:16:41 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:32.943 16:16:41 -- json_config/json_config.sh@317 -- # [[ -n 333081 ]] 00:05:32.943 16:16:41 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:32.943 16:16:41 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:32.943 16:16:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:32.943 16:16:41 -- common/autotest_common.sh@10 -- # set +x 00:05:32.943 16:16:41 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:32.943 16:16:41 -- json_config/json_config.sh@193 -- # uname -s 00:05:32.943 16:16:41 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:32.943 16:16:41 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:32.943 16:16:41 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:32.943 16:16:41 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:32.943 16:16:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:32.943 16:16:41 -- common/autotest_common.sh@10 -- # set +x 00:05:32.943 16:16:41 -- json_config/json_config.sh@323 -- # killprocess 333081 00:05:32.943 16:16:41 -- common/autotest_common.sh@936 -- # '[' -z 333081 ']' 00:05:32.943 16:16:41 -- common/autotest_common.sh@940 -- # kill -0 333081 00:05:32.943 16:16:41 -- common/autotest_common.sh@941 -- # uname 00:05:32.943 16:16:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:32.943 16:16:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 333081 00:05:33.202 16:16:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:33.202 16:16:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:33.202 16:16:41 -- common/autotest_common.sh@954 -- # echo 'killing process 
with pid 333081' 00:05:33.202 killing process with pid 333081 00:05:33.202 16:16:41 -- common/autotest_common.sh@955 -- # kill 333081 00:05:33.202 16:16:41 -- common/autotest_common.sh@960 -- # wait 333081 00:05:33.202 [2024-04-26 16:16:42.078014] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:05:33.771 16:16:42 -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:33.771 16:16:42 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:33.771 16:16:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:33.771 16:16:42 -- common/autotest_common.sh@10 -- # set +x 00:05:33.771 16:16:42 -- json_config/json_config.sh@328 -- # return 0 00:05:33.771 16:16:42 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:33.771 INFO: Success 00:05:33.771 16:16:42 -- json_config/json_config.sh@1 -- # nvmftestfini 00:05:33.771 16:16:42 -- nvmf/common.sh@477 -- # nvmfcleanup 00:05:33.771 16:16:42 -- nvmf/common.sh@117 -- # sync 00:05:33.771 16:16:42 -- nvmf/common.sh@119 -- # '[' '' == tcp ']' 00:05:33.771 16:16:42 -- nvmf/common.sh@119 -- # '[' '' == rdma ']' 00:05:33.771 16:16:42 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:05:33.771 16:16:42 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:05:33.771 16:16:42 -- nvmf/common.sh@484 -- # [[ '' == \t\c\p ]] 00:05:33.771 00:05:33.771 real 0m16.650s 00:05:33.771 user 0m18.943s 00:05:33.771 sys 0m7.374s 00:05:33.771 16:16:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:33.771 16:16:42 -- common/autotest_common.sh@10 -- # set +x 00:05:33.771 ************************************ 00:05:33.771 END TEST json_config 00:05:33.771 ************************************ 00:05:33.771 16:16:42 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:33.771 16:16:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:33.771 16:16:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.771 16:16:42 -- common/autotest_common.sh@10 -- # set +x 00:05:34.031 ************************************ 00:05:34.031 START TEST json_config_extra_key 00:05:34.031 ************************************ 00:05:34.031 16:16:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:34.031 16:16:42 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:34.031 16:16:42 -- nvmf/common.sh@7 -- # uname -s 00:05:34.031 16:16:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:34.031 16:16:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:34.031 16:16:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:34.031 16:16:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:34.031 16:16:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:34.031 16:16:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:34.031 16:16:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:34.031 16:16:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:34.031 16:16:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:34.031 16:16:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:34.031 16:16:42 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:05:34.031 16:16:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:05:34.031 16:16:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:34.031 16:16:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:34.031 16:16:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:34.031 16:16:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:34.031 16:16:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:34.032 16:16:42 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:34.032 16:16:42 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:34.032 16:16:42 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:34.032 16:16:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.032 16:16:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.032 16:16:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.032 16:16:42 -- paths/export.sh@5 -- # export PATH 00:05:34.032 16:16:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.032 16:16:42 -- nvmf/common.sh@47 -- # : 0 00:05:34.032 16:16:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:34.032 16:16:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:34.032 16:16:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:34.032 16:16:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:34.032 16:16:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:34.032 16:16:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:34.032 16:16:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:34.032 16:16:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:34.032 16:16:42 -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:05:34.032 16:16:42 -- 
json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:34.032 16:16:42 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:34.032 16:16:42 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:34.032 16:16:42 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:34.032 16:16:42 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:34.032 16:16:42 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:34.032 16:16:42 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:34.032 16:16:42 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:34.032 16:16:42 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:34.032 16:16:42 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:34.032 INFO: launching applications... 00:05:34.032 16:16:42 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:05:34.032 16:16:42 -- json_config/common.sh@9 -- # local app=target 00:05:34.032 16:16:42 -- json_config/common.sh@10 -- # shift 00:05:34.032 16:16:42 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:34.032 16:16:42 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:34.032 16:16:42 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:34.032 16:16:42 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:34.032 16:16:42 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:34.032 16:16:42 -- json_config/common.sh@22 -- # app_pid["$app"]=333842 00:05:34.032 16:16:42 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:34.032 Waiting for target to run... 00:05:34.032 16:16:42 -- json_config/common.sh@25 -- # waitforlisten 333842 /var/tmp/spdk_tgt.sock 00:05:34.032 16:16:42 -- common/autotest_common.sh@817 -- # '[' -z 333842 ']' 00:05:34.032 16:16:42 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:05:34.032 16:16:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:34.032 16:16:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:34.032 16:16:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:34.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:34.032 16:16:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:34.032 16:16:42 -- common/autotest_common.sh@10 -- # set +x 00:05:34.032 [2024-04-26 16:16:43.013623] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
00:05:34.032 [2024-04-26 16:16:43.013686] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid333842 ] 00:05:34.032 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.600 [2024-04-26 16:16:43.504016] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.600 [2024-04-26 16:16:43.592430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.859 16:16:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:34.859 16:16:43 -- common/autotest_common.sh@850 -- # return 0 00:05:34.859 16:16:43 -- json_config/common.sh@26 -- # echo '' 00:05:34.859 00:05:34.859 16:16:43 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:34.859 INFO: shutting down applications... 00:05:34.859 16:16:43 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:34.859 16:16:43 -- json_config/common.sh@31 -- # local app=target 00:05:34.859 16:16:43 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:34.859 16:16:43 -- json_config/common.sh@35 -- # [[ -n 333842 ]] 00:05:34.859 16:16:43 -- json_config/common.sh@38 -- # kill -SIGINT 333842 00:05:34.859 16:16:43 -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:34.859 16:16:43 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:34.859 16:16:43 -- json_config/common.sh@41 -- # kill -0 333842 00:05:34.859 16:16:43 -- json_config/common.sh@45 -- # sleep 0.5 00:05:35.429 16:16:44 -- json_config/common.sh@40 -- # (( i++ )) 00:05:35.429 16:16:44 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:35.429 16:16:44 -- json_config/common.sh@41 -- # kill -0 333842 00:05:35.429 16:16:44 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:35.429 16:16:44 -- json_config/common.sh@43 -- # break 00:05:35.429 16:16:44 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:35.429 16:16:44 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:35.429 SPDK target shutdown done 00:05:35.429 16:16:44 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:35.429 Success 00:05:35.429 00:05:35.429 real 0m1.473s 00:05:35.429 user 0m1.051s 00:05:35.429 sys 0m0.623s 00:05:35.429 16:16:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:35.429 16:16:44 -- common/autotest_common.sh@10 -- # set +x 00:05:35.429 ************************************ 00:05:35.429 END TEST json_config_extra_key 00:05:35.429 ************************************ 00:05:35.429 16:16:44 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:35.429 16:16:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:35.429 16:16:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:35.429 16:16:44 -- common/autotest_common.sh@10 -- # set +x 00:05:35.690 ************************************ 00:05:35.690 START TEST alias_rpc 00:05:35.690 ************************************ 00:05:35.690 16:16:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:35.690 * Looking for test storage... 
00:05:35.690 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:05:35.690 16:16:44 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:35.690 16:16:44 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=334087 00:05:35.690 16:16:44 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:35.690 16:16:44 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 334087 00:05:35.690 16:16:44 -- common/autotest_common.sh@817 -- # '[' -z 334087 ']' 00:05:35.690 16:16:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.690 16:16:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:35.690 16:16:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.690 16:16:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:35.690 16:16:44 -- common/autotest_common.sh@10 -- # set +x 00:05:35.690 [2024-04-26 16:16:44.686774] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:05:35.690 [2024-04-26 16:16:44.686828] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid334087 ] 00:05:35.950 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.950 [2024-04-26 16:16:44.758251] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.950 [2024-04-26 16:16:44.833001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.521 16:16:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:36.521 16:16:45 -- common/autotest_common.sh@850 -- # return 0 00:05:36.521 16:16:45 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:36.780 16:16:45 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 334087 00:05:36.781 16:16:45 -- common/autotest_common.sh@936 -- # '[' -z 334087 ']' 00:05:36.781 16:16:45 -- common/autotest_common.sh@940 -- # kill -0 334087 00:05:36.781 16:16:45 -- common/autotest_common.sh@941 -- # uname 00:05:36.781 16:16:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:36.781 16:16:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 334087 00:05:36.781 16:16:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:36.781 16:16:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:36.781 16:16:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 334087' 00:05:36.781 killing process with pid 334087 00:05:36.781 16:16:45 -- common/autotest_common.sh@955 -- # kill 334087 00:05:36.781 16:16:45 -- common/autotest_common.sh@960 -- # wait 334087 00:05:37.351 00:05:37.351 real 0m1.549s 00:05:37.351 user 0m1.627s 00:05:37.351 sys 0m0.458s 00:05:37.351 16:16:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:37.351 16:16:46 -- common/autotest_common.sh@10 -- # set +x 00:05:37.351 ************************************ 00:05:37.351 END TEST alias_rpc 00:05:37.351 ************************************ 00:05:37.351 16:16:46 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:05:37.351 16:16:46 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:37.351 16:16:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:37.351 16:16:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:37.351 16:16:46 -- common/autotest_common.sh@10 -- # set +x 00:05:37.351 ************************************ 00:05:37.351 START TEST spdkcli_tcp 00:05:37.351 ************************************ 00:05:37.351 16:16:46 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:37.611 * Looking for test storage... 00:05:37.611 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:05:37.611 16:16:46 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:05:37.611 16:16:46 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:37.611 16:16:46 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:05:37.611 16:16:46 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:37.611 16:16:46 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:37.611 16:16:46 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:37.611 16:16:46 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:37.611 16:16:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:37.611 16:16:46 -- common/autotest_common.sh@10 -- # set +x 00:05:37.611 16:16:46 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=334348 00:05:37.611 16:16:46 -- spdkcli/tcp.sh@27 -- # waitforlisten 334348 00:05:37.611 16:16:46 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:37.611 16:16:46 -- common/autotest_common.sh@817 -- # '[' -z 334348 ']' 00:05:37.611 16:16:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.611 16:16:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:37.611 16:16:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.611 16:16:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:37.611 16:16:46 -- common/autotest_common.sh@10 -- # set +x 00:05:37.611 [2024-04-26 16:16:46.456921] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
00:05:37.611 [2024-04-26 16:16:46.456986] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid334348 ] 00:05:37.611 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.611 [2024-04-26 16:16:46.526641] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:37.611 [2024-04-26 16:16:46.609988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.611 [2024-04-26 16:16:46.609991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.552 16:16:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:38.552 16:16:47 -- common/autotest_common.sh@850 -- # return 0 00:05:38.552 16:16:47 -- spdkcli/tcp.sh@31 -- # socat_pid=334515 00:05:38.552 16:16:47 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:38.552 16:16:47 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:38.552 [ 00:05:38.552 "bdev_malloc_delete", 00:05:38.552 "bdev_malloc_create", 00:05:38.553 "bdev_null_resize", 00:05:38.553 "bdev_null_delete", 00:05:38.553 "bdev_null_create", 00:05:38.553 "bdev_nvme_cuse_unregister", 00:05:38.553 "bdev_nvme_cuse_register", 00:05:38.553 "bdev_opal_new_user", 00:05:38.553 "bdev_opal_set_lock_state", 00:05:38.553 "bdev_opal_delete", 00:05:38.553 "bdev_opal_get_info", 00:05:38.553 "bdev_opal_create", 00:05:38.553 "bdev_nvme_opal_revert", 00:05:38.553 "bdev_nvme_opal_init", 00:05:38.553 "bdev_nvme_send_cmd", 00:05:38.553 "bdev_nvme_get_path_iostat", 00:05:38.553 "bdev_nvme_get_mdns_discovery_info", 00:05:38.553 "bdev_nvme_stop_mdns_discovery", 00:05:38.553 "bdev_nvme_start_mdns_discovery", 00:05:38.553 "bdev_nvme_set_multipath_policy", 00:05:38.553 "bdev_nvme_set_preferred_path", 00:05:38.553 "bdev_nvme_get_io_paths", 00:05:38.553 "bdev_nvme_remove_error_injection", 00:05:38.553 "bdev_nvme_add_error_injection", 00:05:38.553 "bdev_nvme_get_discovery_info", 00:05:38.553 "bdev_nvme_stop_discovery", 00:05:38.553 "bdev_nvme_start_discovery", 00:05:38.553 "bdev_nvme_get_controller_health_info", 00:05:38.553 "bdev_nvme_disable_controller", 00:05:38.553 "bdev_nvme_enable_controller", 00:05:38.553 "bdev_nvme_reset_controller", 00:05:38.553 "bdev_nvme_get_transport_statistics", 00:05:38.553 "bdev_nvme_apply_firmware", 00:05:38.553 "bdev_nvme_detach_controller", 00:05:38.553 "bdev_nvme_get_controllers", 00:05:38.553 "bdev_nvme_attach_controller", 00:05:38.553 "bdev_nvme_set_hotplug", 00:05:38.553 "bdev_nvme_set_options", 00:05:38.553 "bdev_passthru_delete", 00:05:38.553 "bdev_passthru_create", 00:05:38.553 "bdev_lvol_grow_lvstore", 00:05:38.553 "bdev_lvol_get_lvols", 00:05:38.553 "bdev_lvol_get_lvstores", 00:05:38.553 "bdev_lvol_delete", 00:05:38.553 "bdev_lvol_set_read_only", 00:05:38.553 "bdev_lvol_resize", 00:05:38.553 "bdev_lvol_decouple_parent", 00:05:38.553 "bdev_lvol_inflate", 00:05:38.553 "bdev_lvol_rename", 00:05:38.553 "bdev_lvol_clone_bdev", 00:05:38.553 "bdev_lvol_clone", 00:05:38.553 "bdev_lvol_snapshot", 00:05:38.553 "bdev_lvol_create", 00:05:38.553 "bdev_lvol_delete_lvstore", 00:05:38.553 "bdev_lvol_rename_lvstore", 00:05:38.553 "bdev_lvol_create_lvstore", 00:05:38.553 "bdev_raid_set_options", 00:05:38.553 "bdev_raid_remove_base_bdev", 00:05:38.553 "bdev_raid_add_base_bdev", 00:05:38.553 "bdev_raid_delete", 00:05:38.553 "bdev_raid_create", 
00:05:38.553 "bdev_raid_get_bdevs", 00:05:38.553 "bdev_error_inject_error", 00:05:38.553 "bdev_error_delete", 00:05:38.553 "bdev_error_create", 00:05:38.553 "bdev_split_delete", 00:05:38.553 "bdev_split_create", 00:05:38.553 "bdev_delay_delete", 00:05:38.553 "bdev_delay_create", 00:05:38.553 "bdev_delay_update_latency", 00:05:38.553 "bdev_zone_block_delete", 00:05:38.553 "bdev_zone_block_create", 00:05:38.553 "blobfs_create", 00:05:38.553 "blobfs_detect", 00:05:38.553 "blobfs_set_cache_size", 00:05:38.553 "bdev_aio_delete", 00:05:38.553 "bdev_aio_rescan", 00:05:38.553 "bdev_aio_create", 00:05:38.553 "bdev_ftl_set_property", 00:05:38.553 "bdev_ftl_get_properties", 00:05:38.553 "bdev_ftl_get_stats", 00:05:38.553 "bdev_ftl_unmap", 00:05:38.553 "bdev_ftl_unload", 00:05:38.553 "bdev_ftl_delete", 00:05:38.553 "bdev_ftl_load", 00:05:38.553 "bdev_ftl_create", 00:05:38.553 "bdev_virtio_attach_controller", 00:05:38.553 "bdev_virtio_scsi_get_devices", 00:05:38.553 "bdev_virtio_detach_controller", 00:05:38.553 "bdev_virtio_blk_set_hotplug", 00:05:38.553 "bdev_iscsi_delete", 00:05:38.553 "bdev_iscsi_create", 00:05:38.553 "bdev_iscsi_set_options", 00:05:38.553 "accel_error_inject_error", 00:05:38.553 "ioat_scan_accel_module", 00:05:38.553 "dsa_scan_accel_module", 00:05:38.553 "iaa_scan_accel_module", 00:05:38.553 "keyring_file_remove_key", 00:05:38.553 "keyring_file_add_key", 00:05:38.553 "iscsi_get_histogram", 00:05:38.553 "iscsi_enable_histogram", 00:05:38.553 "iscsi_set_options", 00:05:38.553 "iscsi_get_auth_groups", 00:05:38.553 "iscsi_auth_group_remove_secret", 00:05:38.553 "iscsi_auth_group_add_secret", 00:05:38.553 "iscsi_delete_auth_group", 00:05:38.553 "iscsi_create_auth_group", 00:05:38.553 "iscsi_set_discovery_auth", 00:05:38.553 "iscsi_get_options", 00:05:38.553 "iscsi_target_node_request_logout", 00:05:38.553 "iscsi_target_node_set_redirect", 00:05:38.553 "iscsi_target_node_set_auth", 00:05:38.553 "iscsi_target_node_add_lun", 00:05:38.553 "iscsi_get_stats", 00:05:38.553 "iscsi_get_connections", 00:05:38.553 "iscsi_portal_group_set_auth", 00:05:38.553 "iscsi_start_portal_group", 00:05:38.553 "iscsi_delete_portal_group", 00:05:38.553 "iscsi_create_portal_group", 00:05:38.553 "iscsi_get_portal_groups", 00:05:38.553 "iscsi_delete_target_node", 00:05:38.553 "iscsi_target_node_remove_pg_ig_maps", 00:05:38.553 "iscsi_target_node_add_pg_ig_maps", 00:05:38.553 "iscsi_create_target_node", 00:05:38.553 "iscsi_get_target_nodes", 00:05:38.553 "iscsi_delete_initiator_group", 00:05:38.553 "iscsi_initiator_group_remove_initiators", 00:05:38.553 "iscsi_initiator_group_add_initiators", 00:05:38.553 "iscsi_create_initiator_group", 00:05:38.553 "iscsi_get_initiator_groups", 00:05:38.553 "nvmf_set_crdt", 00:05:38.553 "nvmf_set_config", 00:05:38.553 "nvmf_set_max_subsystems", 00:05:38.553 "nvmf_subsystem_get_listeners", 00:05:38.553 "nvmf_subsystem_get_qpairs", 00:05:38.553 "nvmf_subsystem_get_controllers", 00:05:38.553 "nvmf_get_stats", 00:05:38.553 "nvmf_get_transports", 00:05:38.553 "nvmf_create_transport", 00:05:38.553 "nvmf_get_targets", 00:05:38.553 "nvmf_delete_target", 00:05:38.553 "nvmf_create_target", 00:05:38.553 "nvmf_subsystem_allow_any_host", 00:05:38.553 "nvmf_subsystem_remove_host", 00:05:38.553 "nvmf_subsystem_add_host", 00:05:38.553 "nvmf_ns_remove_host", 00:05:38.553 "nvmf_ns_add_host", 00:05:38.553 "nvmf_subsystem_remove_ns", 00:05:38.553 "nvmf_subsystem_add_ns", 00:05:38.553 "nvmf_subsystem_listener_set_ana_state", 00:05:38.553 "nvmf_discovery_get_referrals", 00:05:38.553 
"nvmf_discovery_remove_referral", 00:05:38.553 "nvmf_discovery_add_referral", 00:05:38.553 "nvmf_subsystem_remove_listener", 00:05:38.553 "nvmf_subsystem_add_listener", 00:05:38.553 "nvmf_delete_subsystem", 00:05:38.553 "nvmf_create_subsystem", 00:05:38.553 "nvmf_get_subsystems", 00:05:38.553 "env_dpdk_get_mem_stats", 00:05:38.553 "nbd_get_disks", 00:05:38.553 "nbd_stop_disk", 00:05:38.553 "nbd_start_disk", 00:05:38.553 "ublk_recover_disk", 00:05:38.553 "ublk_get_disks", 00:05:38.553 "ublk_stop_disk", 00:05:38.553 "ublk_start_disk", 00:05:38.553 "ublk_destroy_target", 00:05:38.553 "ublk_create_target", 00:05:38.553 "virtio_blk_create_transport", 00:05:38.553 "virtio_blk_get_transports", 00:05:38.553 "vhost_controller_set_coalescing", 00:05:38.553 "vhost_get_controllers", 00:05:38.553 "vhost_delete_controller", 00:05:38.553 "vhost_create_blk_controller", 00:05:38.553 "vhost_scsi_controller_remove_target", 00:05:38.553 "vhost_scsi_controller_add_target", 00:05:38.553 "vhost_start_scsi_controller", 00:05:38.553 "vhost_create_scsi_controller", 00:05:38.553 "thread_set_cpumask", 00:05:38.553 "framework_get_scheduler", 00:05:38.553 "framework_set_scheduler", 00:05:38.553 "framework_get_reactors", 00:05:38.553 "thread_get_io_channels", 00:05:38.553 "thread_get_pollers", 00:05:38.553 "thread_get_stats", 00:05:38.553 "framework_monitor_context_switch", 00:05:38.553 "spdk_kill_instance", 00:05:38.553 "log_enable_timestamps", 00:05:38.553 "log_get_flags", 00:05:38.553 "log_clear_flag", 00:05:38.553 "log_set_flag", 00:05:38.553 "log_get_level", 00:05:38.553 "log_set_level", 00:05:38.553 "log_get_print_level", 00:05:38.553 "log_set_print_level", 00:05:38.553 "framework_enable_cpumask_locks", 00:05:38.553 "framework_disable_cpumask_locks", 00:05:38.553 "framework_wait_init", 00:05:38.553 "framework_start_init", 00:05:38.553 "scsi_get_devices", 00:05:38.553 "bdev_get_histogram", 00:05:38.553 "bdev_enable_histogram", 00:05:38.553 "bdev_set_qos_limit", 00:05:38.553 "bdev_set_qd_sampling_period", 00:05:38.553 "bdev_get_bdevs", 00:05:38.553 "bdev_reset_iostat", 00:05:38.553 "bdev_get_iostat", 00:05:38.553 "bdev_examine", 00:05:38.553 "bdev_wait_for_examine", 00:05:38.553 "bdev_set_options", 00:05:38.553 "notify_get_notifications", 00:05:38.553 "notify_get_types", 00:05:38.553 "accel_get_stats", 00:05:38.553 "accel_set_options", 00:05:38.553 "accel_set_driver", 00:05:38.553 "accel_crypto_key_destroy", 00:05:38.553 "accel_crypto_keys_get", 00:05:38.553 "accel_crypto_key_create", 00:05:38.553 "accel_assign_opc", 00:05:38.553 "accel_get_module_info", 00:05:38.553 "accel_get_opc_assignments", 00:05:38.553 "vmd_rescan", 00:05:38.553 "vmd_remove_device", 00:05:38.553 "vmd_enable", 00:05:38.553 "sock_set_default_impl", 00:05:38.553 "sock_impl_set_options", 00:05:38.553 "sock_impl_get_options", 00:05:38.553 "iobuf_get_stats", 00:05:38.553 "iobuf_set_options", 00:05:38.553 "framework_get_pci_devices", 00:05:38.553 "framework_get_config", 00:05:38.553 "framework_get_subsystems", 00:05:38.553 "trace_get_info", 00:05:38.553 "trace_get_tpoint_group_mask", 00:05:38.553 "trace_disable_tpoint_group", 00:05:38.553 "trace_enable_tpoint_group", 00:05:38.553 "trace_clear_tpoint_mask", 00:05:38.553 "trace_set_tpoint_mask", 00:05:38.553 "keyring_get_keys", 00:05:38.553 "spdk_get_version", 00:05:38.553 "rpc_get_methods" 00:05:38.553 ] 00:05:38.553 16:16:47 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:38.553 16:16:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:38.553 16:16:47 -- 
common/autotest_common.sh@10 -- # set +x 00:05:38.553 16:16:47 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:38.553 16:16:47 -- spdkcli/tcp.sh@38 -- # killprocess 334348 00:05:38.553 16:16:47 -- common/autotest_common.sh@936 -- # '[' -z 334348 ']' 00:05:38.553 16:16:47 -- common/autotest_common.sh@940 -- # kill -0 334348 00:05:38.553 16:16:47 -- common/autotest_common.sh@941 -- # uname 00:05:38.553 16:16:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:38.553 16:16:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 334348 00:05:38.553 16:16:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:38.553 16:16:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:38.553 16:16:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 334348' 00:05:38.553 killing process with pid 334348 00:05:38.553 16:16:47 -- common/autotest_common.sh@955 -- # kill 334348 00:05:38.553 16:16:47 -- common/autotest_common.sh@960 -- # wait 334348 00:05:39.124 00:05:39.124 real 0m1.590s 00:05:39.124 user 0m2.857s 00:05:39.124 sys 0m0.503s 00:05:39.124 16:16:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:39.124 16:16:47 -- common/autotest_common.sh@10 -- # set +x 00:05:39.124 ************************************ 00:05:39.124 END TEST spdkcli_tcp 00:05:39.124 ************************************ 00:05:39.124 16:16:47 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:39.124 16:16:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:39.124 16:16:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.124 16:16:47 -- common/autotest_common.sh@10 -- # set +x 00:05:39.124 ************************************ 00:05:39.124 START TEST dpdk_mem_utility 00:05:39.124 ************************************ 00:05:39.124 16:16:48 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:39.385 * Looking for test storage... 00:05:39.385 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:05:39.385 16:16:48 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:39.385 16:16:48 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=334762 00:05:39.385 16:16:48 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 334762 00:05:39.385 16:16:48 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:39.385 16:16:48 -- common/autotest_common.sh@817 -- # '[' -z 334762 ']' 00:05:39.385 16:16:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.385 16:16:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:39.385 16:16:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.385 16:16:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:39.385 16:16:48 -- common/autotest_common.sh@10 -- # set +x 00:05:39.385 [2024-04-26 16:16:48.209955] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
00:05:39.385 [2024-04-26 16:16:48.210009] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid334762 ] 00:05:39.385 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.385 [2024-04-26 16:16:48.281262] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.385 [2024-04-26 16:16:48.363062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.326 16:16:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:40.326 16:16:49 -- common/autotest_common.sh@850 -- # return 0 00:05:40.326 16:16:49 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:40.326 16:16:49 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:40.326 16:16:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:40.326 16:16:49 -- common/autotest_common.sh@10 -- # set +x 00:05:40.326 { 00:05:40.326 "filename": "/tmp/spdk_mem_dump.txt" 00:05:40.326 } 00:05:40.326 16:16:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:40.326 16:16:49 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:40.326 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:40.326 1 heaps totaling size 814.000000 MiB 00:05:40.326 size: 814.000000 MiB heap id: 0 00:05:40.326 end heaps---------- 00:05:40.326 8 mempools totaling size 598.116089 MiB 00:05:40.326 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:40.326 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:40.326 size: 84.521057 MiB name: bdev_io_334762 00:05:40.326 size: 51.011292 MiB name: evtpool_334762 00:05:40.326 size: 50.003479 MiB name: msgpool_334762 00:05:40.326 size: 21.763794 MiB name: PDU_Pool 00:05:40.326 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:40.326 size: 0.026123 MiB name: Session_Pool 00:05:40.326 end mempools------- 00:05:40.326 6 memzones totaling size 4.142822 MiB 00:05:40.326 size: 1.000366 MiB name: RG_ring_0_334762 00:05:40.326 size: 1.000366 MiB name: RG_ring_1_334762 00:05:40.326 size: 1.000366 MiB name: RG_ring_4_334762 00:05:40.326 size: 1.000366 MiB name: RG_ring_5_334762 00:05:40.326 size: 0.125366 MiB name: RG_ring_2_334762 00:05:40.326 size: 0.015991 MiB name: RG_ring_3_334762 00:05:40.326 end memzones------- 00:05:40.326 16:16:49 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:40.326 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:40.326 list of free elements. 
size: 12.519348 MiB 00:05:40.326 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:40.326 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:40.326 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:40.326 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:40.326 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:40.326 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:40.326 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:40.326 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:40.326 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:40.326 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:40.326 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:40.326 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:40.326 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:40.326 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:40.326 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:40.326 list of standard malloc elements. size: 199.218079 MiB 00:05:40.326 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:40.326 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:40.326 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:40.326 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:40.326 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:40.326 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:40.326 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:40.326 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:40.326 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:40.326 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:40.326 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:40.326 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:40.326 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:40.326 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:40.326 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:40.326 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:40.326 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:40.326 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:40.326 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:40.326 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:40.326 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:40.326 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:40.326 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:40.326 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:40.326 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:40.326 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:40.326 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:40.326 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:40.326 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:40.326 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:40.326 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:40.326 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:40.326 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:05:40.326 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:40.326 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:40.326 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:40.326 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:40.326 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:40.326 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:40.326 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:40.326 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:40.326 list of memzone associated elements. size: 602.262573 MiB 00:05:40.326 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:40.326 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:40.326 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:40.326 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:40.326 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:40.326 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_334762_0 00:05:40.326 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:40.326 associated memzone info: size: 48.002930 MiB name: MP_evtpool_334762_0 00:05:40.326 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:40.326 associated memzone info: size: 48.002930 MiB name: MP_msgpool_334762_0 00:05:40.326 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:40.326 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:40.326 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:40.326 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:40.326 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:40.326 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_334762 00:05:40.326 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:40.326 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_334762 00:05:40.326 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:40.326 associated memzone info: size: 1.007996 MiB name: MP_evtpool_334762 00:05:40.326 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:40.326 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:40.326 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:40.326 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:40.326 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:40.326 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:40.326 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:40.326 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:40.326 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:40.326 associated memzone info: size: 1.000366 MiB name: RG_ring_0_334762 00:05:40.326 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:40.326 associated memzone info: size: 1.000366 MiB name: RG_ring_1_334762 00:05:40.326 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:40.326 associated memzone info: size: 1.000366 MiB name: RG_ring_4_334762 00:05:40.326 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:40.326 associated memzone info: size: 1.000366 MiB name: RG_ring_5_334762 00:05:40.326 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:40.326 associated memzone 
info: size: 0.500366 MiB name: RG_MP_bdev_io_334762 00:05:40.326 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:40.326 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:40.326 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:40.326 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:40.326 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:40.327 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:40.327 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:40.327 associated memzone info: size: 0.125366 MiB name: RG_ring_2_334762 00:05:40.327 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:40.327 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:40.327 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:40.327 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:40.327 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:40.327 associated memzone info: size: 0.015991 MiB name: RG_ring_3_334762 00:05:40.327 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:40.327 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:40.327 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:40.327 associated memzone info: size: 0.000183 MiB name: MP_msgpool_334762 00:05:40.327 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:40.327 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_334762 00:05:40.327 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:40.327 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:40.327 16:16:49 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:40.327 16:16:49 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 334762 00:05:40.327 16:16:49 -- common/autotest_common.sh@936 -- # '[' -z 334762 ']' 00:05:40.327 16:16:49 -- common/autotest_common.sh@940 -- # kill -0 334762 00:05:40.327 16:16:49 -- common/autotest_common.sh@941 -- # uname 00:05:40.327 16:16:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:40.327 16:16:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 334762 00:05:40.327 16:16:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:40.327 16:16:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:40.327 16:16:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 334762' 00:05:40.327 killing process with pid 334762 00:05:40.327 16:16:49 -- common/autotest_common.sh@955 -- # kill 334762 00:05:40.327 16:16:49 -- common/autotest_common.sh@960 -- # wait 334762 00:05:40.588 00:05:40.588 real 0m1.466s 00:05:40.588 user 0m1.462s 00:05:40.588 sys 0m0.469s 00:05:40.588 16:16:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:40.588 16:16:49 -- common/autotest_common.sh@10 -- # set +x 00:05:40.588 ************************************ 00:05:40.588 END TEST dpdk_mem_utility 00:05:40.588 ************************************ 00:05:40.588 16:16:49 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:05:40.588 16:16:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:40.588 16:16:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:40.588 16:16:49 -- common/autotest_common.sh@10 -- # set +x 00:05:40.848 
************************************ 00:05:40.848 START TEST event 00:05:40.848 ************************************ 00:05:40.848 16:16:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:05:40.848 * Looking for test storage... 00:05:40.848 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:05:40.848 16:16:49 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:40.848 16:16:49 -- bdev/nbd_common.sh@6 -- # set -e 00:05:40.848 16:16:49 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:40.848 16:16:49 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:40.848 16:16:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:40.848 16:16:49 -- common/autotest_common.sh@10 -- # set +x 00:05:41.108 ************************************ 00:05:41.108 START TEST event_perf 00:05:41.108 ************************************ 00:05:41.108 16:16:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:41.108 Running I/O for 1 seconds...[2024-04-26 16:16:50.001386] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:05:41.108 [2024-04-26 16:16:50.001451] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid335026 ] 00:05:41.108 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.108 [2024-04-26 16:16:50.082400] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:41.367 [2024-04-26 16:16:50.167040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.367 [2024-04-26 16:16:50.167057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:41.367 [2024-04-26 16:16:50.167137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:41.367 [2024-04-26 16:16:50.167138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.305 Running I/O for 1 seconds... 00:05:42.305 lcore 0: 204018 00:05:42.305 lcore 1: 204017 00:05:42.305 lcore 2: 204019 00:05:42.305 lcore 3: 204019 00:05:42.305 done. 00:05:42.305 00:05:42.305 real 0m1.276s 00:05:42.305 user 0m4.170s 00:05:42.305 sys 0m0.100s 00:05:42.305 16:16:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:42.305 16:16:51 -- common/autotest_common.sh@10 -- # set +x 00:05:42.305 ************************************ 00:05:42.305 END TEST event_perf 00:05:42.305 ************************************ 00:05:42.305 16:16:51 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:42.305 16:16:51 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:42.305 16:16:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:42.305 16:16:51 -- common/autotest_common.sh@10 -- # set +x 00:05:42.566 ************************************ 00:05:42.566 START TEST event_reactor 00:05:42.566 ************************************ 00:05:42.566 16:16:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:42.566 [2024-04-26 16:16:51.462660] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
00:05:42.566 [2024-04-26 16:16:51.462721] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid335238 ] 00:05:42.566 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.566 [2024-04-26 16:16:51.535813] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.825 [2024-04-26 16:16:51.616361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.764 test_start 00:05:43.764 oneshot 00:05:43.764 tick 100 00:05:43.764 tick 100 00:05:43.764 tick 250 00:05:43.764 tick 100 00:05:43.764 tick 100 00:05:43.764 tick 100 00:05:43.764 tick 250 00:05:43.764 tick 500 00:05:43.764 tick 100 00:05:43.764 tick 100 00:05:43.764 tick 250 00:05:43.764 tick 100 00:05:43.764 tick 100 00:05:43.764 test_end 00:05:43.764 00:05:43.764 real 0m1.265s 00:05:43.764 user 0m1.166s 00:05:43.764 sys 0m0.094s 00:05:43.764 16:16:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:43.764 16:16:52 -- common/autotest_common.sh@10 -- # set +x 00:05:43.764 ************************************ 00:05:43.764 END TEST event_reactor 00:05:43.764 ************************************ 00:05:43.764 16:16:52 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:43.764 16:16:52 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:43.764 16:16:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:43.764 16:16:52 -- common/autotest_common.sh@10 -- # set +x 00:05:44.024 ************************************ 00:05:44.024 START TEST event_reactor_perf 00:05:44.024 ************************************ 00:05:44.024 16:16:52 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:44.024 [2024-04-26 16:16:52.913152] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
00:05:44.024 [2024-04-26 16:16:52.913230] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid335450 ] 00:05:44.024 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.024 [2024-04-26 16:16:52.987323] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.284 [2024-04-26 16:16:53.067412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.223 test_start 00:05:45.223 test_end 00:05:45.223 Performance: 505513 events per second 00:05:45.223 00:05:45.223 real 0m1.268s 00:05:45.223 user 0m1.167s 00:05:45.223 sys 0m0.097s 00:05:45.223 16:16:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:45.223 16:16:54 -- common/autotest_common.sh@10 -- # set +x 00:05:45.223 ************************************ 00:05:45.223 END TEST event_reactor_perf 00:05:45.223 ************************************ 00:05:45.223 16:16:54 -- event/event.sh@49 -- # uname -s 00:05:45.223 16:16:54 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:45.223 16:16:54 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:45.223 16:16:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:45.223 16:16:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:45.223 16:16:54 -- common/autotest_common.sh@10 -- # set +x 00:05:45.484 ************************************ 00:05:45.484 START TEST event_scheduler 00:05:45.484 ************************************ 00:05:45.484 16:16:54 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:45.484 * Looking for test storage... 00:05:45.484 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:05:45.484 16:16:54 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:45.484 16:16:54 -- scheduler/scheduler.sh@35 -- # scheduler_pid=335683 00:05:45.484 16:16:54 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:45.484 16:16:54 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:45.484 16:16:54 -- scheduler/scheduler.sh@37 -- # waitforlisten 335683 00:05:45.484 16:16:54 -- common/autotest_common.sh@817 -- # '[' -z 335683 ']' 00:05:45.484 16:16:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.484 16:16:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:45.484 16:16:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.484 16:16:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:45.484 16:16:54 -- common/autotest_common.sh@10 -- # set +x 00:05:45.484 [2024-04-26 16:16:54.490136] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
00:05:45.484 [2024-04-26 16:16:54.490190] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid335683 ] 00:05:45.744 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.744 [2024-04-26 16:16:54.561986] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:45.744 [2024-04-26 16:16:54.644247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.744 [2024-04-26 16:16:54.644322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.744 [2024-04-26 16:16:54.644402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:45.744 [2024-04-26 16:16:54.644405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:46.312 16:16:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:46.312 16:16:55 -- common/autotest_common.sh@850 -- # return 0 00:05:46.313 16:16:55 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:46.313 16:16:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:46.313 16:16:55 -- common/autotest_common.sh@10 -- # set +x 00:05:46.313 POWER: Env isn't set yet! 00:05:46.313 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:46.313 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:46.313 POWER: Cannot set governor of lcore 0 to userspace 00:05:46.313 POWER: Attempting to initialise PSTAT power management... 00:05:46.313 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:46.313 POWER: Initialized successfully for lcore 0 power management 00:05:46.572 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:46.572 POWER: Initialized successfully for lcore 1 power management 00:05:46.572 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:46.572 POWER: Initialized successfully for lcore 2 power management 00:05:46.572 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:46.572 POWER: Initialized successfully for lcore 3 power management 00:05:46.572 16:16:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:46.572 16:16:55 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:46.572 16:16:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:46.572 16:16:55 -- common/autotest_common.sh@10 -- # set +x 00:05:46.572 [2024-04-26 16:16:55.426936] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
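The two rpc_cmd calls traced above map onto standard framework RPCs: the dynamic scheduler has to be selected while the app is still paused, and framework_start_init then releases it. Expressed directly against rpc.py on the same default socket, a sketch is:

  SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk      # assumption: same checkout as this job
  "$SPDK_DIR/scripts/rpc.py" framework_set_scheduler dynamic  # must run before init completes
  "$SPDK_DIR/scripts/rpc.py" framework_start_init             # lifts the --wait-for-rpc pause

The POWER/governor lines above come from the dynamic scheduler initializing CPU frequency control on each reactor core.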
00:05:46.572 16:16:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:46.572 16:16:55 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:46.572 16:16:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:46.573 16:16:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:46.573 16:16:55 -- common/autotest_common.sh@10 -- # set +x 00:05:46.573 ************************************ 00:05:46.573 START TEST scheduler_create_thread 00:05:46.573 ************************************ 00:05:46.573 16:16:55 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:05:46.573 16:16:55 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:46.573 16:16:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:46.573 16:16:55 -- common/autotest_common.sh@10 -- # set +x 00:05:46.832 2 00:05:46.832 16:16:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:46.832 16:16:55 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:46.832 16:16:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:46.832 16:16:55 -- common/autotest_common.sh@10 -- # set +x 00:05:46.832 3 00:05:46.832 16:16:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:46.832 16:16:55 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:46.833 16:16:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:46.833 16:16:55 -- common/autotest_common.sh@10 -- # set +x 00:05:46.833 4 00:05:46.833 16:16:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:46.833 16:16:55 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:46.833 16:16:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:46.833 16:16:55 -- common/autotest_common.sh@10 -- # set +x 00:05:46.833 5 00:05:46.833 16:16:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:46.833 16:16:55 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:46.833 16:16:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:46.833 16:16:55 -- common/autotest_common.sh@10 -- # set +x 00:05:46.833 6 00:05:46.833 16:16:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:46.833 16:16:55 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:46.833 16:16:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:46.833 16:16:55 -- common/autotest_common.sh@10 -- # set +x 00:05:46.833 7 00:05:46.833 16:16:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:46.833 16:16:55 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:46.833 16:16:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:46.833 16:16:55 -- common/autotest_common.sh@10 -- # set +x 00:05:46.833 8 00:05:46.833 16:16:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:46.833 16:16:55 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:46.833 16:16:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:46.833 16:16:55 -- common/autotest_common.sh@10 -- # set +x 00:05:46.833 9 00:05:46.833 
16:16:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:46.833 16:16:55 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:46.833 16:16:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:46.833 16:16:55 -- common/autotest_common.sh@10 -- # set +x 00:05:46.833 10 00:05:46.833 16:16:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:46.833 16:16:55 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:46.833 16:16:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:46.833 16:16:55 -- common/autotest_common.sh@10 -- # set +x 00:05:46.833 16:16:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:46.833 16:16:55 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:46.833 16:16:55 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:46.833 16:16:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:46.833 16:16:55 -- common/autotest_common.sh@10 -- # set +x 00:05:47.401 16:16:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:47.401 16:16:56 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:47.401 16:16:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:47.401 16:16:56 -- common/autotest_common.sh@10 -- # set +x 00:05:48.779 16:16:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:48.779 16:16:57 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:48.780 16:16:57 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:48.780 16:16:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:48.780 16:16:57 -- common/autotest_common.sh@10 -- # set +x 00:05:49.717 16:16:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:49.717 00:05:49.717 real 0m3.100s 00:05:49.717 user 0m0.019s 00:05:49.717 sys 0m0.010s 00:05:49.717 16:16:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:49.717 16:16:58 -- common/autotest_common.sh@10 -- # set +x 00:05:49.717 ************************************ 00:05:49.717 END TEST scheduler_create_thread 00:05:49.717 ************************************ 00:05:49.717 16:16:58 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:49.718 16:16:58 -- scheduler/scheduler.sh@46 -- # killprocess 335683 00:05:49.718 16:16:58 -- common/autotest_common.sh@936 -- # '[' -z 335683 ']' 00:05:49.718 16:16:58 -- common/autotest_common.sh@940 -- # kill -0 335683 00:05:49.718 16:16:58 -- common/autotest_common.sh@941 -- # uname 00:05:49.718 16:16:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:49.718 16:16:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 335683 00:05:49.977 16:16:58 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:49.977 16:16:58 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:49.977 16:16:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 335683' 00:05:49.977 killing process with pid 335683 00:05:49.977 16:16:58 -- common/autotest_common.sh@955 -- # kill 335683 00:05:49.977 16:16:58 -- common/autotest_common.sh@960 -- # wait 335683 00:05:50.237 [2024-04-26 16:16:59.066596] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
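The thread bookkeeping above is driven entirely through the test's rpc.py plugin: every scheduler_thread_create returns the new thread id, which the script feeds back into scheduler_thread_set_active and scheduler_thread_delete. A condensed sketch of that sequence (PYTHONPATH setup for scheduler_plugin is assumed to be handled by scheduler.sh and is omitted here):

  SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk     # assumption: same checkout as this job
  rpc() { "$SPDK_DIR/scripts/rpc.py" --plugin scheduler_plugin "$@"; }
  rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100 # 100% busy thread pinned to core 0
  tid=$(rpc scheduler_thread_create -n half_active -a 0)     # create idle, capture the thread id
  rpc scheduler_thread_set_active "$tid" 50                  # then raise it to ~50% active
  tid=$(rpc scheduler_thread_create -n deleted -a 100)
  rpc scheduler_thread_delete "$tid"                         # and delete this one again

The killprocess teardown traced right above stops the scheduler app, and the POWER lines that follow show the cpufreq governors being restored.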
00:05:50.237 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:50.237 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:50.237 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:50.237 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:50.237 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:50.237 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:50.237 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:50.237 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:50.497 00:05:50.497 real 0m4.964s 00:05:50.497 user 0m9.605s 00:05:50.497 sys 0m0.514s 00:05:50.497 16:16:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:50.497 16:16:59 -- common/autotest_common.sh@10 -- # set +x 00:05:50.497 ************************************ 00:05:50.497 END TEST event_scheduler 00:05:50.497 ************************************ 00:05:50.497 16:16:59 -- event/event.sh@51 -- # modprobe -n nbd 00:05:50.497 16:16:59 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:50.497 16:16:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:50.497 16:16:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:50.497 16:16:59 -- common/autotest_common.sh@10 -- # set +x 00:05:50.757 ************************************ 00:05:50.757 START TEST app_repeat 00:05:50.757 ************************************ 00:05:50.757 16:16:59 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:05:50.757 16:16:59 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.757 16:16:59 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.757 16:16:59 -- event/event.sh@13 -- # local nbd_list 00:05:50.757 16:16:59 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.757 16:16:59 -- event/event.sh@14 -- # local bdev_list 00:05:50.757 16:16:59 -- event/event.sh@15 -- # local repeat_times=4 00:05:50.757 16:16:59 -- event/event.sh@17 -- # modprobe nbd 00:05:50.757 16:16:59 -- event/event.sh@19 -- # repeat_pid=336477 00:05:50.757 16:16:59 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:50.757 16:16:59 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:50.757 16:16:59 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 336477' 00:05:50.757 Process app_repeat pid: 336477 00:05:50.757 16:16:59 -- event/event.sh@23 -- # for i in {0..2} 00:05:50.757 16:16:59 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:50.757 spdk_app_start Round 0 00:05:50.757 16:16:59 -- event/event.sh@25 -- # waitforlisten 336477 /var/tmp/spdk-nbd.sock 00:05:50.757 16:16:59 -- common/autotest_common.sh@817 -- # '[' -z 336477 ']' 00:05:50.757 16:16:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:50.757 16:16:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:50.757 16:16:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:50.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:50.757 16:16:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:50.757 16:16:59 -- common/autotest_common.sh@10 -- # set +x 00:05:50.757 [2024-04-26 16:16:59.579741] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:05:50.757 [2024-04-26 16:16:59.579802] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid336477 ] 00:05:50.757 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.757 [2024-04-26 16:16:59.648489] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:50.757 [2024-04-26 16:16:59.734665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.757 [2024-04-26 16:16:59.734667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.693 16:17:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:51.693 16:17:00 -- common/autotest_common.sh@850 -- # return 0 00:05:51.693 16:17:00 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:51.693 Malloc0 00:05:51.693 16:17:00 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:51.952 Malloc1 00:05:51.952 16:17:00 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:51.952 16:17:00 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.952 16:17:00 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.952 16:17:00 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:51.952 16:17:00 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.952 16:17:00 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:51.952 16:17:00 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:51.952 16:17:00 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.952 16:17:00 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.952 16:17:00 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:51.952 16:17:00 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.952 16:17:00 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:51.952 16:17:00 -- bdev/nbd_common.sh@12 -- # local i 00:05:51.952 16:17:00 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:51.952 16:17:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.952 16:17:00 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:51.952 /dev/nbd0 00:05:51.952 16:17:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:51.952 16:17:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:51.952 16:17:00 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:51.952 16:17:00 -- common/autotest_common.sh@855 -- # local i 00:05:51.952 16:17:00 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:51.952 16:17:00 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:51.952 16:17:00 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:52.211 16:17:00 -- common/autotest_common.sh@859 -- # 
break 00:05:52.211 16:17:00 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:52.211 16:17:00 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:52.211 16:17:00 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:52.211 1+0 records in 00:05:52.211 1+0 records out 00:05:52.211 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226955 s, 18.0 MB/s 00:05:52.211 16:17:00 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:52.211 16:17:00 -- common/autotest_common.sh@872 -- # size=4096 00:05:52.211 16:17:00 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:52.211 16:17:00 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:52.211 16:17:00 -- common/autotest_common.sh@875 -- # return 0 00:05:52.211 16:17:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.211 16:17:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.211 16:17:00 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:52.211 /dev/nbd1 00:05:52.211 16:17:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:52.211 16:17:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:52.211 16:17:01 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:52.211 16:17:01 -- common/autotest_common.sh@855 -- # local i 00:05:52.211 16:17:01 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:52.211 16:17:01 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:52.211 16:17:01 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:52.211 16:17:01 -- common/autotest_common.sh@859 -- # break 00:05:52.211 16:17:01 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:52.211 16:17:01 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:52.211 16:17:01 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:52.211 1+0 records in 00:05:52.211 1+0 records out 00:05:52.211 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288347 s, 14.2 MB/s 00:05:52.211 16:17:01 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:52.211 16:17:01 -- common/autotest_common.sh@872 -- # size=4096 00:05:52.211 16:17:01 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:52.211 16:17:01 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:52.211 16:17:01 -- common/autotest_common.sh@875 -- # return 0 00:05:52.211 16:17:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:52.211 16:17:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:52.211 16:17:01 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.211 16:17:01 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.211 16:17:01 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.470 16:17:01 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:52.470 { 00:05:52.470 "nbd_device": "/dev/nbd0", 00:05:52.470 "bdev_name": "Malloc0" 00:05:52.470 }, 00:05:52.470 { 00:05:52.470 "nbd_device": "/dev/nbd1", 00:05:52.470 "bdev_name": "Malloc1" 00:05:52.470 } 00:05:52.470 ]' 
00:05:52.470 16:17:01 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:52.470 { 00:05:52.470 "nbd_device": "/dev/nbd0", 00:05:52.470 "bdev_name": "Malloc0" 00:05:52.470 }, 00:05:52.470 { 00:05:52.470 "nbd_device": "/dev/nbd1", 00:05:52.470 "bdev_name": "Malloc1" 00:05:52.470 } 00:05:52.470 ]' 00:05:52.470 16:17:01 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:52.470 16:17:01 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:52.470 /dev/nbd1' 00:05:52.470 16:17:01 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:52.470 /dev/nbd1' 00:05:52.470 16:17:01 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:52.470 16:17:01 -- bdev/nbd_common.sh@65 -- # count=2 00:05:52.470 16:17:01 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:52.470 16:17:01 -- bdev/nbd_common.sh@95 -- # count=2 00:05:52.470 16:17:01 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:52.470 16:17:01 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:52.470 16:17:01 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.470 16:17:01 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:52.470 16:17:01 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:52.470 16:17:01 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:52.470 16:17:01 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:52.470 16:17:01 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:52.470 256+0 records in 00:05:52.470 256+0 records out 00:05:52.470 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114321 s, 91.7 MB/s 00:05:52.470 16:17:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:52.470 16:17:01 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:52.470 256+0 records in 00:05:52.470 256+0 records out 00:05:52.470 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0205027 s, 51.1 MB/s 00:05:52.470 16:17:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:52.470 16:17:01 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:52.470 256+0 records in 00:05:52.470 256+0 records out 00:05:52.470 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0215874 s, 48.6 MB/s 00:05:52.470 16:17:01 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:52.470 16:17:01 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.470 16:17:01 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:52.470 16:17:01 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:52.470 16:17:01 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:52.470 16:17:01 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:52.470 16:17:01 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:52.470 16:17:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:52.470 16:17:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:52.729 16:17:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:52.729 16:17:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 
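Each app_repeat round validates the two exported Malloc bdevs end to end: the script confirms both NBD devices are attached, writes a 1 MiB random file through each /dev/nbdX with O_DIRECT, and compares it back against the source. The same check, stripped of the harness plumbing (paths and socket as in this log):

  SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk    # assumption: same checkout as this job
  tmp=$SPDK_DIR/test/event/nbdrandtest                      # scratch file, same name as above
  # Precondition: both NBD devices exported by app_repeat must be attached.
  "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock nbd_get_disks \
    | jq -r '.[] | .nbd_device' | grep -c /dev/nbd          # expect 2
  dd if=/dev/urandom of="$tmp" bs=4096 count=256            # 1 MiB of random test data
  for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write it through the NBD device
  done
  for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M "$tmp" "$nbd"                              # read back and byte-compare
  done
  rm "$tmp"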
00:05:52.729 16:17:01 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:52.729 16:17:01 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:52.729 16:17:01 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.729 16:17:01 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.729 16:17:01 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:52.729 16:17:01 -- bdev/nbd_common.sh@51 -- # local i 00:05:52.729 16:17:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:52.729 16:17:01 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:52.729 16:17:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:52.729 16:17:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:52.729 16:17:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:52.729 16:17:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.729 16:17:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.729 16:17:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:52.729 16:17:01 -- bdev/nbd_common.sh@41 -- # break 00:05:52.729 16:17:01 -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.729 16:17:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:52.729 16:17:01 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:52.988 16:17:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:52.988 16:17:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:52.988 16:17:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:52.988 16:17:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.988 16:17:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.988 16:17:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:52.988 16:17:01 -- bdev/nbd_common.sh@41 -- # break 00:05:52.988 16:17:01 -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.988 16:17:01 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.988 16:17:01 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.988 16:17:01 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:53.246 16:17:02 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:53.246 16:17:02 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:53.246 16:17:02 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:53.246 16:17:02 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:53.246 16:17:02 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:53.246 16:17:02 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:53.246 16:17:02 -- bdev/nbd_common.sh@65 -- # true 00:05:53.246 16:17:02 -- bdev/nbd_common.sh@65 -- # count=0 00:05:53.246 16:17:02 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:53.246 16:17:02 -- bdev/nbd_common.sh@104 -- # count=0 00:05:53.246 16:17:02 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:53.246 16:17:02 -- bdev/nbd_common.sh@109 -- # return 0 00:05:53.247 16:17:02 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:53.506 16:17:02 -- event/event.sh@35 -- # sleep 3 00:05:53.765 [2024-04-26 16:17:02.539084] app.c: 828:spdk_app_start: *NOTICE*: Total cores 
available: 2 00:05:53.765 [2024-04-26 16:17:02.615702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.765 [2024-04-26 16:17:02.615704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.765 [2024-04-26 16:17:02.664126] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:53.765 [2024-04-26 16:17:02.664178] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:57.055 16:17:05 -- event/event.sh@23 -- # for i in {0..2} 00:05:57.056 16:17:05 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:57.056 spdk_app_start Round 1 00:05:57.056 16:17:05 -- event/event.sh@25 -- # waitforlisten 336477 /var/tmp/spdk-nbd.sock 00:05:57.056 16:17:05 -- common/autotest_common.sh@817 -- # '[' -z 336477 ']' 00:05:57.056 16:17:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:57.056 16:17:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:57.056 16:17:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:57.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:57.056 16:17:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:57.056 16:17:05 -- common/autotest_common.sh@10 -- # set +x 00:05:57.056 16:17:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:57.056 16:17:05 -- common/autotest_common.sh@850 -- # return 0 00:05:57.056 16:17:05 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:57.056 Malloc0 00:05:57.056 16:17:05 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:57.056 Malloc1 00:05:57.056 16:17:05 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:57.056 16:17:05 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.056 16:17:05 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:57.056 16:17:05 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:57.056 16:17:05 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.056 16:17:05 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:57.056 16:17:05 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:57.056 16:17:05 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.056 16:17:05 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:57.056 16:17:05 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:57.056 16:17:05 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.056 16:17:05 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:57.056 16:17:05 -- bdev/nbd_common.sh@12 -- # local i 00:05:57.056 16:17:05 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:57.056 16:17:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.056 16:17:05 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:57.056 /dev/nbd0 00:05:57.056 16:17:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:57.056 16:17:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:57.056 
16:17:06 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:57.056 16:17:06 -- common/autotest_common.sh@855 -- # local i 00:05:57.056 16:17:06 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:57.056 16:17:06 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:57.056 16:17:06 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:57.056 16:17:06 -- common/autotest_common.sh@859 -- # break 00:05:57.315 16:17:06 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:57.315 16:17:06 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:57.315 16:17:06 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:57.315 1+0 records in 00:05:57.315 1+0 records out 00:05:57.315 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248173 s, 16.5 MB/s 00:05:57.315 16:17:06 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:57.315 16:17:06 -- common/autotest_common.sh@872 -- # size=4096 00:05:57.315 16:17:06 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:57.315 16:17:06 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:57.315 16:17:06 -- common/autotest_common.sh@875 -- # return 0 00:05:57.315 16:17:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:57.315 16:17:06 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.315 16:17:06 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:57.315 /dev/nbd1 00:05:57.315 16:17:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:57.315 16:17:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:57.315 16:17:06 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:57.315 16:17:06 -- common/autotest_common.sh@855 -- # local i 00:05:57.315 16:17:06 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:57.315 16:17:06 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:57.315 16:17:06 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:57.315 16:17:06 -- common/autotest_common.sh@859 -- # break 00:05:57.315 16:17:06 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:57.315 16:17:06 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:57.315 16:17:06 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:57.315 1+0 records in 00:05:57.315 1+0 records out 00:05:57.315 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270335 s, 15.2 MB/s 00:05:57.315 16:17:06 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:57.315 16:17:06 -- common/autotest_common.sh@872 -- # size=4096 00:05:57.315 16:17:06 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:57.315 16:17:06 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:57.315 16:17:06 -- common/autotest_common.sh@875 -- # return 0 00:05:57.315 16:17:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:57.315 16:17:06 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.315 16:17:06 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:57.315 16:17:06 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.315 
16:17:06 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:57.575 16:17:06 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:57.575 { 00:05:57.575 "nbd_device": "/dev/nbd0", 00:05:57.575 "bdev_name": "Malloc0" 00:05:57.575 }, 00:05:57.575 { 00:05:57.575 "nbd_device": "/dev/nbd1", 00:05:57.575 "bdev_name": "Malloc1" 00:05:57.575 } 00:05:57.575 ]' 00:05:57.575 16:17:06 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:57.575 { 00:05:57.575 "nbd_device": "/dev/nbd0", 00:05:57.575 "bdev_name": "Malloc0" 00:05:57.575 }, 00:05:57.575 { 00:05:57.575 "nbd_device": "/dev/nbd1", 00:05:57.575 "bdev_name": "Malloc1" 00:05:57.575 } 00:05:57.575 ]' 00:05:57.575 16:17:06 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:57.575 16:17:06 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:57.575 /dev/nbd1' 00:05:57.575 16:17:06 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:57.575 16:17:06 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:57.575 /dev/nbd1' 00:05:57.575 16:17:06 -- bdev/nbd_common.sh@65 -- # count=2 00:05:57.575 16:17:06 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:57.575 16:17:06 -- bdev/nbd_common.sh@95 -- # count=2 00:05:57.575 16:17:06 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:57.575 16:17:06 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:57.575 16:17:06 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.575 16:17:06 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:57.575 16:17:06 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:57.575 16:17:06 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:57.575 16:17:06 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:57.575 16:17:06 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:57.575 256+0 records in 00:05:57.575 256+0 records out 00:05:57.575 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011448 s, 91.6 MB/s 00:05:57.575 16:17:06 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:57.575 16:17:06 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:57.575 256+0 records in 00:05:57.575 256+0 records out 00:05:57.575 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0205841 s, 50.9 MB/s 00:05:57.575 16:17:06 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:57.575 16:17:06 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:57.575 256+0 records in 00:05:57.575 256+0 records out 00:05:57.575 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0216325 s, 48.5 MB/s 00:05:57.575 16:17:06 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:57.575 16:17:06 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.575 16:17:06 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:57.575 16:17:06 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:57.575 16:17:06 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:57.575 16:17:06 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:57.575 16:17:06 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:57.575 
16:17:06 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:57.575 16:17:06 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:57.575 16:17:06 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:57.834 16:17:06 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:57.834 16:17:06 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:57.834 16:17:06 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:57.834 16:17:06 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.834 16:17:06 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.834 16:17:06 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:57.834 16:17:06 -- bdev/nbd_common.sh@51 -- # local i 00:05:57.834 16:17:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:57.834 16:17:06 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:57.834 16:17:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:57.834 16:17:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:57.834 16:17:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:57.834 16:17:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:57.834 16:17:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.834 16:17:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:57.834 16:17:06 -- bdev/nbd_common.sh@41 -- # break 00:05:57.834 16:17:06 -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.834 16:17:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:57.834 16:17:06 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:58.099 16:17:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:58.099 16:17:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:58.099 16:17:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:58.099 16:17:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:58.099 16:17:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:58.099 16:17:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:58.099 16:17:06 -- bdev/nbd_common.sh@41 -- # break 00:05:58.099 16:17:06 -- bdev/nbd_common.sh@45 -- # return 0 00:05:58.099 16:17:06 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:58.099 16:17:06 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.099 16:17:07 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:58.360 16:17:07 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:58.360 16:17:07 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:58.360 16:17:07 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:58.360 16:17:07 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:58.360 16:17:07 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:58.360 16:17:07 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:58.360 16:17:07 -- bdev/nbd_common.sh@65 -- # true 00:05:58.360 16:17:07 -- bdev/nbd_common.sh@65 -- # count=0 00:05:58.360 16:17:07 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:58.360 16:17:07 -- bdev/nbd_common.sh@104 -- # count=0 00:05:58.360 
16:17:07 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:58.360 16:17:07 -- bdev/nbd_common.sh@109 -- # return 0 00:05:58.360 16:17:07 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:58.620 16:17:07 -- event/event.sh@35 -- # sleep 3 00:05:58.620 [2024-04-26 16:17:07.640381] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:58.879 [2024-04-26 16:17:07.721895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.879 [2024-04-26 16:17:07.721898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.879 [2024-04-26 16:17:07.770858] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:58.879 [2024-04-26 16:17:07.770910] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:01.417 16:17:10 -- event/event.sh@23 -- # for i in {0..2} 00:06:01.417 16:17:10 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:01.417 spdk_app_start Round 2 00:06:01.417 16:17:10 -- event/event.sh@25 -- # waitforlisten 336477 /var/tmp/spdk-nbd.sock 00:06:01.417 16:17:10 -- common/autotest_common.sh@817 -- # '[' -z 336477 ']' 00:06:01.417 16:17:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:01.417 16:17:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:01.417 16:17:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:01.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:01.417 16:17:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:01.417 16:17:10 -- common/autotest_common.sh@10 -- # set +x 00:06:01.676 16:17:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:01.676 16:17:10 -- common/autotest_common.sh@850 -- # return 0 00:06:01.676 16:17:10 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:01.936 Malloc0 00:06:01.936 16:17:10 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:01.936 Malloc1 00:06:02.257 16:17:10 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:02.257 16:17:10 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.257 16:17:10 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.257 16:17:10 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:02.257 16:17:10 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.257 16:17:10 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:02.257 16:17:10 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:02.257 16:17:10 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.257 16:17:10 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.257 16:17:10 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:02.257 16:17:10 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.257 16:17:10 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:02.257 16:17:10 -- bdev/nbd_common.sh@12 -- # local i 00:06:02.257 16:17:10 -- bdev/nbd_common.sh@14 -- # 
(( i = 0 )) 00:06:02.257 16:17:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.257 16:17:10 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:02.257 /dev/nbd0 00:06:02.257 16:17:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:02.257 16:17:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:02.257 16:17:11 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:06:02.257 16:17:11 -- common/autotest_common.sh@855 -- # local i 00:06:02.257 16:17:11 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:02.257 16:17:11 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:02.257 16:17:11 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:06:02.257 16:17:11 -- common/autotest_common.sh@859 -- # break 00:06:02.257 16:17:11 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:02.257 16:17:11 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:02.257 16:17:11 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:02.257 1+0 records in 00:06:02.257 1+0 records out 00:06:02.257 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000138865 s, 29.5 MB/s 00:06:02.257 16:17:11 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:02.257 16:17:11 -- common/autotest_common.sh@872 -- # size=4096 00:06:02.257 16:17:11 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:02.257 16:17:11 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:02.257 16:17:11 -- common/autotest_common.sh@875 -- # return 0 00:06:02.257 16:17:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:02.257 16:17:11 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.257 16:17:11 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:02.538 /dev/nbd1 00:06:02.538 16:17:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:02.538 16:17:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:02.538 16:17:11 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:06:02.538 16:17:11 -- common/autotest_common.sh@855 -- # local i 00:06:02.538 16:17:11 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:02.538 16:17:11 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:02.538 16:17:11 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:06:02.538 16:17:11 -- common/autotest_common.sh@859 -- # break 00:06:02.538 16:17:11 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:02.538 16:17:11 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:02.538 16:17:11 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:02.538 1+0 records in 00:06:02.538 1+0 records out 00:06:02.538 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026083 s, 15.7 MB/s 00:06:02.538 16:17:11 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:02.538 16:17:11 -- common/autotest_common.sh@872 -- # size=4096 00:06:02.538 16:17:11 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:02.538 16:17:11 -- common/autotest_common.sh@874 
-- # '[' 4096 '!=' 0 ']' 00:06:02.538 16:17:11 -- common/autotest_common.sh@875 -- # return 0 00:06:02.538 16:17:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:02.538 16:17:11 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.538 16:17:11 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.538 16:17:11 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.538 16:17:11 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:02.824 16:17:11 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:02.824 { 00:06:02.824 "nbd_device": "/dev/nbd0", 00:06:02.824 "bdev_name": "Malloc0" 00:06:02.824 }, 00:06:02.824 { 00:06:02.824 "nbd_device": "/dev/nbd1", 00:06:02.824 "bdev_name": "Malloc1" 00:06:02.824 } 00:06:02.824 ]' 00:06:02.824 16:17:11 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:02.824 { 00:06:02.824 "nbd_device": "/dev/nbd0", 00:06:02.824 "bdev_name": "Malloc0" 00:06:02.824 }, 00:06:02.824 { 00:06:02.825 "nbd_device": "/dev/nbd1", 00:06:02.825 "bdev_name": "Malloc1" 00:06:02.825 } 00:06:02.825 ]' 00:06:02.825 16:17:11 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:02.825 16:17:11 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:02.825 /dev/nbd1' 00:06:02.825 16:17:11 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.825 16:17:11 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:02.825 /dev/nbd1' 00:06:02.825 16:17:11 -- bdev/nbd_common.sh@65 -- # count=2 00:06:02.825 16:17:11 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:02.825 16:17:11 -- bdev/nbd_common.sh@95 -- # count=2 00:06:02.825 16:17:11 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:02.825 16:17:11 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:02.825 16:17:11 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.825 16:17:11 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:02.825 16:17:11 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:02.825 16:17:11 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:02.825 16:17:11 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:02.825 16:17:11 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:02.825 256+0 records in 00:06:02.825 256+0 records out 00:06:02.825 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0112023 s, 93.6 MB/s 00:06:02.825 16:17:11 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.825 16:17:11 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:02.825 256+0 records in 00:06:02.825 256+0 records out 00:06:02.825 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0205097 s, 51.1 MB/s 00:06:02.825 16:17:11 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.825 16:17:11 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:02.825 256+0 records in 00:06:02.825 256+0 records out 00:06:02.825 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0215983 s, 48.5 MB/s 00:06:02.825 16:17:11 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:02.825 16:17:11 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.825 16:17:11 -- 
bdev/nbd_common.sh@70 -- # local nbd_list 00:06:02.825 16:17:11 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:02.825 16:17:11 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:02.825 16:17:11 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:02.825 16:17:11 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:02.825 16:17:11 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.825 16:17:11 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:02.825 16:17:11 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.825 16:17:11 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:02.825 16:17:11 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:02.825 16:17:11 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:02.825 16:17:11 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.825 16:17:11 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.825 16:17:11 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:02.825 16:17:11 -- bdev/nbd_common.sh@51 -- # local i 00:06:02.825 16:17:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.825 16:17:11 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:03.129 16:17:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:03.129 16:17:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:03.129 16:17:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:03.129 16:17:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.129 16:17:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.129 16:17:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:03.129 16:17:11 -- bdev/nbd_common.sh@41 -- # break 00:06:03.129 16:17:11 -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.129 16:17:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.129 16:17:11 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:03.129 16:17:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:03.129 16:17:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:03.129 16:17:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:03.129 16:17:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.129 16:17:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.129 16:17:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:03.129 16:17:12 -- bdev/nbd_common.sh@41 -- # break 00:06:03.129 16:17:12 -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.129 16:17:12 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:03.129 16:17:12 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.129 16:17:12 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:03.421 16:17:12 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:03.422 16:17:12 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:03.422 16:17:12 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:03.422 16:17:12 -- 
bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:03.422 16:17:12 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:03.422 16:17:12 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:03.422 16:17:12 -- bdev/nbd_common.sh@65 -- # true 00:06:03.422 16:17:12 -- bdev/nbd_common.sh@65 -- # count=0 00:06:03.422 16:17:12 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:03.422 16:17:12 -- bdev/nbd_common.sh@104 -- # count=0 00:06:03.422 16:17:12 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:03.422 16:17:12 -- bdev/nbd_common.sh@109 -- # return 0 00:06:03.422 16:17:12 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:03.741 16:17:12 -- event/event.sh@35 -- # sleep 3 00:06:03.741 [2024-04-26 16:17:12.723864] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:04.028 [2024-04-26 16:17:12.803749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.028 [2024-04-26 16:17:12.803751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.028 [2024-04-26 16:17:12.852056] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:04.028 [2024-04-26 16:17:12.852109] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:06.616 16:17:15 -- event/event.sh@38 -- # waitforlisten 336477 /var/tmp/spdk-nbd.sock 00:06:06.616 16:17:15 -- common/autotest_common.sh@817 -- # '[' -z 336477 ']' 00:06:06.616 16:17:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:06.616 16:17:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:06.616 16:17:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:06.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:06.616 16:17:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:06.616 16:17:15 -- common/autotest_common.sh@10 -- # set +x 00:06:06.875 16:17:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:06.875 16:17:15 -- common/autotest_common.sh@850 -- # return 0 00:06:06.875 16:17:15 -- event/event.sh@39 -- # killprocess 336477 00:06:06.875 16:17:15 -- common/autotest_common.sh@936 -- # '[' -z 336477 ']' 00:06:06.875 16:17:15 -- common/autotest_common.sh@940 -- # kill -0 336477 00:06:06.875 16:17:15 -- common/autotest_common.sh@941 -- # uname 00:06:06.875 16:17:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:06.875 16:17:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 336477 00:06:06.875 16:17:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:06.875 16:17:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:06.875 16:17:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 336477' 00:06:06.875 killing process with pid 336477 00:06:06.875 16:17:15 -- common/autotest_common.sh@955 -- # kill 336477 00:06:06.875 16:17:15 -- common/autotest_common.sh@960 -- # wait 336477 00:06:07.134 spdk_app_start is called in Round 0. 00:06:07.134 Shutdown signal received, stop current app iteration 00:06:07.134 Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 reinitialization... 00:06:07.134 spdk_app_start is called in Round 1. 
00:06:07.134 Shutdown signal received, stop current app iteration 00:06:07.134 Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 reinitialization... 00:06:07.134 spdk_app_start is called in Round 2. 00:06:07.134 Shutdown signal received, stop current app iteration 00:06:07.134 Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 reinitialization... 00:06:07.134 spdk_app_start is called in Round 3. 00:06:07.134 Shutdown signal received, stop current app iteration 00:06:07.134 16:17:15 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:07.134 16:17:15 -- event/event.sh@42 -- # return 0 00:06:07.134 00:06:07.134 real 0m16.397s 00:06:07.134 user 0m34.736s 00:06:07.134 sys 0m3.053s 00:06:07.134 16:17:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:07.134 16:17:15 -- common/autotest_common.sh@10 -- # set +x 00:06:07.134 ************************************ 00:06:07.134 END TEST app_repeat 00:06:07.134 ************************************ 00:06:07.134 16:17:15 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:07.134 16:17:15 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:07.134 16:17:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:07.134 16:17:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.134 16:17:15 -- common/autotest_common.sh@10 -- # set +x 00:06:07.134 ************************************ 00:06:07.134 START TEST cpu_locks 00:06:07.134 ************************************ 00:06:07.134 16:17:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:07.393 * Looking for test storage... 00:06:07.393 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:07.393 16:17:16 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:07.393 16:17:16 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:07.393 16:17:16 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:07.393 16:17:16 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:07.393 16:17:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:07.393 16:17:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.393 16:17:16 -- common/autotest_common.sh@10 -- # set +x 00:06:07.393 ************************************ 00:06:07.393 START TEST default_locks 00:06:07.393 ************************************ 00:06:07.393 16:17:16 -- common/autotest_common.sh@1111 -- # default_locks 00:06:07.393 16:17:16 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:07.393 16:17:16 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=338935 00:06:07.393 16:17:16 -- event/cpu_locks.sh@47 -- # waitforlisten 338935 00:06:07.393 16:17:16 -- common/autotest_common.sh@817 -- # '[' -z 338935 ']' 00:06:07.393 16:17:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.393 16:17:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:07.393 16:17:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:07.393 16:17:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:07.393 16:17:16 -- common/autotest_common.sh@10 -- # set +x 00:06:07.393 [2024-04-26 16:17:16.410244] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:06:07.393 [2024-04-26 16:17:16.410301] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid338935 ] 00:06:07.653 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.653 [2024-04-26 16:17:16.479112] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.653 [2024-04-26 16:17:16.568077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.222 16:17:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:08.222 16:17:17 -- common/autotest_common.sh@850 -- # return 0 00:06:08.222 16:17:17 -- event/cpu_locks.sh@49 -- # locks_exist 338935 00:06:08.222 16:17:17 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:08.222 16:17:17 -- event/cpu_locks.sh@22 -- # lslocks -p 338935 00:06:08.789 lslocks: write error 00:06:08.789 16:17:17 -- event/cpu_locks.sh@50 -- # killprocess 338935 00:06:08.789 16:17:17 -- common/autotest_common.sh@936 -- # '[' -z 338935 ']' 00:06:08.789 16:17:17 -- common/autotest_common.sh@940 -- # kill -0 338935 00:06:08.789 16:17:17 -- common/autotest_common.sh@941 -- # uname 00:06:08.789 16:17:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:08.789 16:17:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 338935 00:06:08.789 16:17:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:08.789 16:17:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:08.789 16:17:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 338935' 00:06:08.789 killing process with pid 338935 00:06:08.789 16:17:17 -- common/autotest_common.sh@955 -- # kill 338935 00:06:08.789 16:17:17 -- common/autotest_common.sh@960 -- # wait 338935 00:06:09.358 16:17:18 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 338935 00:06:09.358 16:17:18 -- common/autotest_common.sh@638 -- # local es=0 00:06:09.358 16:17:18 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 338935 00:06:09.358 16:17:18 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:06:09.358 16:17:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:09.358 16:17:18 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:06:09.358 16:17:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:09.358 16:17:18 -- common/autotest_common.sh@641 -- # waitforlisten 338935 00:06:09.358 16:17:18 -- common/autotest_common.sh@817 -- # '[' -z 338935 ']' 00:06:09.358 16:17:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.358 16:17:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:09.358 16:17:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:09.358 16:17:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:09.358 16:17:18 -- common/autotest_common.sh@10 -- # set +x 00:06:09.358 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (338935) - No such process 00:06:09.358 ERROR: process (pid: 338935) is no longer running 00:06:09.358 16:17:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:09.358 16:17:18 -- common/autotest_common.sh@850 -- # return 1 00:06:09.358 16:17:18 -- common/autotest_common.sh@641 -- # es=1 00:06:09.358 16:17:18 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:09.358 16:17:18 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:09.358 16:17:18 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:09.358 16:17:18 -- event/cpu_locks.sh@54 -- # no_locks 00:06:09.358 16:17:18 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:09.358 16:17:18 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:09.358 16:17:18 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:09.358 00:06:09.358 real 0m1.729s 00:06:09.358 user 0m1.779s 00:06:09.358 sys 0m0.586s 00:06:09.358 16:17:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:09.358 16:17:18 -- common/autotest_common.sh@10 -- # set +x 00:06:09.358 ************************************ 00:06:09.358 END TEST default_locks 00:06:09.358 ************************************ 00:06:09.358 16:17:18 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:09.358 16:17:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:09.358 16:17:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:09.358 16:17:18 -- common/autotest_common.sh@10 -- # set +x 00:06:09.358 ************************************ 00:06:09.358 START TEST default_locks_via_rpc 00:06:09.358 ************************************ 00:06:09.358 16:17:18 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:06:09.358 16:17:18 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=339321 00:06:09.358 16:17:18 -- event/cpu_locks.sh@63 -- # waitforlisten 339321 00:06:09.358 16:17:18 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:09.358 16:17:18 -- common/autotest_common.sh@817 -- # '[' -z 339321 ']' 00:06:09.358 16:17:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.358 16:17:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:09.358 16:17:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.358 16:17:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:09.358 16:17:18 -- common/autotest_common.sh@10 -- # set +x 00:06:09.358 [2024-04-26 16:17:18.318491] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
00:06:09.358 [2024-04-26 16:17:18.318546] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid339321 ] 00:06:09.358 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.617 [2024-04-26 16:17:18.390814] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.617 [2024-04-26 16:17:18.475717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.183 16:17:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:10.183 16:17:19 -- common/autotest_common.sh@850 -- # return 0 00:06:10.183 16:17:19 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:10.183 16:17:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:10.183 16:17:19 -- common/autotest_common.sh@10 -- # set +x 00:06:10.183 16:17:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:10.183 16:17:19 -- event/cpu_locks.sh@67 -- # no_locks 00:06:10.183 16:17:19 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:10.183 16:17:19 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:10.183 16:17:19 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:10.183 16:17:19 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:10.183 16:17:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:10.183 16:17:19 -- common/autotest_common.sh@10 -- # set +x 00:06:10.183 16:17:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:10.183 16:17:19 -- event/cpu_locks.sh@71 -- # locks_exist 339321 00:06:10.183 16:17:19 -- event/cpu_locks.sh@22 -- # lslocks -p 339321 00:06:10.183 16:17:19 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:11.152 16:17:19 -- event/cpu_locks.sh@73 -- # killprocess 339321 00:06:11.152 16:17:19 -- common/autotest_common.sh@936 -- # '[' -z 339321 ']' 00:06:11.152 16:17:19 -- common/autotest_common.sh@940 -- # kill -0 339321 00:06:11.152 16:17:19 -- common/autotest_common.sh@941 -- # uname 00:06:11.152 16:17:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:11.152 16:17:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 339321 00:06:11.152 16:17:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:11.152 16:17:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:11.152 16:17:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 339321' 00:06:11.152 killing process with pid 339321 00:06:11.152 16:17:19 -- common/autotest_common.sh@955 -- # kill 339321 00:06:11.152 16:17:19 -- common/autotest_common.sh@960 -- # wait 339321 00:06:11.413 00:06:11.413 real 0m1.956s 00:06:11.413 user 0m2.021s 00:06:11.413 sys 0m0.678s 00:06:11.413 16:17:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:11.413 16:17:20 -- common/autotest_common.sh@10 -- # set +x 00:06:11.413 ************************************ 00:06:11.413 END TEST default_locks_via_rpc 00:06:11.413 ************************************ 00:06:11.413 16:17:20 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:11.413 16:17:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:11.413 16:17:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:11.413 16:17:20 -- common/autotest_common.sh@10 -- # set +x 00:06:11.413 ************************************ 00:06:11.413 START TEST non_locking_app_on_locked_coremask 00:06:11.413 
************************************ 00:06:11.413 16:17:20 -- common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:06:11.413 16:17:20 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=339552 00:06:11.413 16:17:20 -- event/cpu_locks.sh@81 -- # waitforlisten 339552 /var/tmp/spdk.sock 00:06:11.413 16:17:20 -- common/autotest_common.sh@817 -- # '[' -z 339552 ']' 00:06:11.413 16:17:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.413 16:17:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:11.413 16:17:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.413 16:17:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:11.413 16:17:20 -- common/autotest_common.sh@10 -- # set +x 00:06:11.413 16:17:20 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:11.413 [2024-04-26 16:17:20.438781] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:06:11.414 [2024-04-26 16:17:20.438837] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid339552 ] 00:06:11.672 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.672 [2024-04-26 16:17:20.512909] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.672 [2024-04-26 16:17:20.598796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.238 16:17:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:12.238 16:17:21 -- common/autotest_common.sh@850 -- # return 0 00:06:12.238 16:17:21 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=339734 00:06:12.238 16:17:21 -- event/cpu_locks.sh@85 -- # waitforlisten 339734 /var/tmp/spdk2.sock 00:06:12.238 16:17:21 -- common/autotest_common.sh@817 -- # '[' -z 339734 ']' 00:06:12.238 16:17:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:12.238 16:17:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:12.238 16:17:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:12.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:12.238 16:17:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:12.238 16:17:21 -- common/autotest_common.sh@10 -- # set +x 00:06:12.238 16:17:21 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:12.498 [2024-04-26 16:17:21.274997] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:06:12.498 [2024-04-26 16:17:21.275056] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid339734 ] 00:06:12.498 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.498 [2024-04-26 16:17:21.371554] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:12.498 [2024-04-26 16:17:21.371577] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.756 [2024-04-26 16:17:21.523890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.325 16:17:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:13.325 16:17:22 -- common/autotest_common.sh@850 -- # return 0 00:06:13.325 16:17:22 -- event/cpu_locks.sh@87 -- # locks_exist 339552 00:06:13.325 16:17:22 -- event/cpu_locks.sh@22 -- # lslocks -p 339552 00:06:13.325 16:17:22 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:14.702 lslocks: write error 00:06:14.702 16:17:23 -- event/cpu_locks.sh@89 -- # killprocess 339552 00:06:14.702 16:17:23 -- common/autotest_common.sh@936 -- # '[' -z 339552 ']' 00:06:14.702 16:17:23 -- common/autotest_common.sh@940 -- # kill -0 339552 00:06:14.702 16:17:23 -- common/autotest_common.sh@941 -- # uname 00:06:14.702 16:17:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:14.702 16:17:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 339552 00:06:14.702 16:17:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:14.702 16:17:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:14.702 16:17:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 339552' 00:06:14.702 killing process with pid 339552 00:06:14.702 16:17:23 -- common/autotest_common.sh@955 -- # kill 339552 00:06:14.702 16:17:23 -- common/autotest_common.sh@960 -- # wait 339552 00:06:15.270 16:17:24 -- event/cpu_locks.sh@90 -- # killprocess 339734 00:06:15.270 16:17:24 -- common/autotest_common.sh@936 -- # '[' -z 339734 ']' 00:06:15.270 16:17:24 -- common/autotest_common.sh@940 -- # kill -0 339734 00:06:15.270 16:17:24 -- common/autotest_common.sh@941 -- # uname 00:06:15.270 16:17:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:15.270 16:17:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 339734 00:06:15.270 16:17:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:15.270 16:17:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:15.270 16:17:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 339734' 00:06:15.270 killing process with pid 339734 00:06:15.270 16:17:24 -- common/autotest_common.sh@955 -- # kill 339734 00:06:15.270 16:17:24 -- common/autotest_common.sh@960 -- # wait 339734 00:06:15.530 00:06:15.530 real 0m4.153s 00:06:15.530 user 0m4.369s 00:06:15.530 sys 0m1.404s 00:06:15.530 16:17:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:15.530 16:17:24 -- common/autotest_common.sh@10 -- # set +x 00:06:15.530 ************************************ 00:06:15.530 END TEST non_locking_app_on_locked_coremask 00:06:15.530 ************************************ 00:06:15.789 16:17:24 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:15.789 16:17:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:15.789 16:17:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.789 16:17:24 -- common/autotest_common.sh@10 -- # set +x 00:06:15.789 ************************************ 00:06:15.789 START TEST locking_app_on_unlocked_coremask 00:06:15.789 ************************************ 00:06:15.789 16:17:24 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:06:15.789 16:17:24 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=340173 00:06:15.789 16:17:24 -- event/cpu_locks.sh@99 -- # 
waitforlisten 340173 /var/tmp/spdk.sock 00:06:15.789 16:17:24 -- common/autotest_common.sh@817 -- # '[' -z 340173 ']' 00:06:15.790 16:17:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.790 16:17:24 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:15.790 16:17:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:15.790 16:17:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.790 16:17:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:15.790 16:17:24 -- common/autotest_common.sh@10 -- # set +x 00:06:15.790 [2024-04-26 16:17:24.771067] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:06:15.790 [2024-04-26 16:17:24.771124] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid340173 ] 00:06:15.790 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.049 [2024-04-26 16:17:24.842968] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:16.049 [2024-04-26 16:17:24.842997] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.049 [2024-04-26 16:17:24.928225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.617 16:17:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:16.617 16:17:25 -- common/autotest_common.sh@850 -- # return 0 00:06:16.617 16:17:25 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=340324 00:06:16.617 16:17:25 -- event/cpu_locks.sh@103 -- # waitforlisten 340324 /var/tmp/spdk2.sock 00:06:16.617 16:17:25 -- common/autotest_common.sh@817 -- # '[' -z 340324 ']' 00:06:16.617 16:17:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:16.617 16:17:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:16.617 16:17:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:16.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:16.617 16:17:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:16.617 16:17:25 -- common/autotest_common.sh@10 -- # set +x 00:06:16.617 16:17:25 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:16.617 [2024-04-26 16:17:25.598687] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
00:06:16.617 [2024-04-26 16:17:25.598746] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid340324 ] 00:06:16.617 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.876 [2024-04-26 16:17:25.694195] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.876 [2024-04-26 16:17:25.846726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.444 16:17:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:17.444 16:17:26 -- common/autotest_common.sh@850 -- # return 0 00:06:17.444 16:17:26 -- event/cpu_locks.sh@105 -- # locks_exist 340324 00:06:17.444 16:17:26 -- event/cpu_locks.sh@22 -- # lslocks -p 340324 00:06:17.444 16:17:26 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:18.381 lslocks: write error 00:06:18.381 16:17:27 -- event/cpu_locks.sh@107 -- # killprocess 340173 00:06:18.381 16:17:27 -- common/autotest_common.sh@936 -- # '[' -z 340173 ']' 00:06:18.381 16:17:27 -- common/autotest_common.sh@940 -- # kill -0 340173 00:06:18.381 16:17:27 -- common/autotest_common.sh@941 -- # uname 00:06:18.381 16:17:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:18.381 16:17:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 340173 00:06:18.381 16:17:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:18.381 16:17:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:18.381 16:17:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 340173' 00:06:18.381 killing process with pid 340173 00:06:18.381 16:17:27 -- common/autotest_common.sh@955 -- # kill 340173 00:06:18.381 16:17:27 -- common/autotest_common.sh@960 -- # wait 340173 00:06:18.950 16:17:27 -- event/cpu_locks.sh@108 -- # killprocess 340324 00:06:18.950 16:17:27 -- common/autotest_common.sh@936 -- # '[' -z 340324 ']' 00:06:18.950 16:17:27 -- common/autotest_common.sh@940 -- # kill -0 340324 00:06:18.950 16:17:27 -- common/autotest_common.sh@941 -- # uname 00:06:18.950 16:17:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:18.950 16:17:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 340324 00:06:18.950 16:17:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:18.950 16:17:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:18.950 16:17:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 340324' 00:06:18.950 killing process with pid 340324 00:06:18.950 16:17:27 -- common/autotest_common.sh@955 -- # kill 340324 00:06:18.950 16:17:27 -- common/autotest_common.sh@960 -- # wait 340324 00:06:19.519 00:06:19.519 real 0m3.532s 00:06:19.519 user 0m3.685s 00:06:19.519 sys 0m1.092s 00:06:19.519 16:17:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:19.519 16:17:28 -- common/autotest_common.sh@10 -- # set +x 00:06:19.519 ************************************ 00:06:19.519 END TEST locking_app_on_unlocked_coremask 00:06:19.519 ************************************ 00:06:19.519 16:17:28 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:19.519 16:17:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:19.519 16:17:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:19.519 16:17:28 -- common/autotest_common.sh@10 -- # set +x 00:06:19.519 
************************************ 00:06:19.519 START TEST locking_app_on_locked_coremask 00:06:19.519 ************************************ 00:06:19.519 16:17:28 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:06:19.519 16:17:28 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=340736 00:06:19.519 16:17:28 -- event/cpu_locks.sh@116 -- # waitforlisten 340736 /var/tmp/spdk.sock 00:06:19.519 16:17:28 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:19.519 16:17:28 -- common/autotest_common.sh@817 -- # '[' -z 340736 ']' 00:06:19.519 16:17:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.519 16:17:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:19.519 16:17:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.519 16:17:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:19.519 16:17:28 -- common/autotest_common.sh@10 -- # set +x 00:06:19.519 [2024-04-26 16:17:28.515954] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:06:19.519 [2024-04-26 16:17:28.516008] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid340736 ] 00:06:19.778 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.778 [2024-04-26 16:17:28.586964] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.778 [2024-04-26 16:17:28.669267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.348 16:17:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:20.348 16:17:29 -- common/autotest_common.sh@850 -- # return 0 00:06:20.348 16:17:29 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:20.348 16:17:29 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=340919 00:06:20.348 16:17:29 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 340919 /var/tmp/spdk2.sock 00:06:20.348 16:17:29 -- common/autotest_common.sh@638 -- # local es=0 00:06:20.348 16:17:29 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 340919 /var/tmp/spdk2.sock 00:06:20.348 16:17:29 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:06:20.348 16:17:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:20.348 16:17:29 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:06:20.348 16:17:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:20.348 16:17:29 -- common/autotest_common.sh@641 -- # waitforlisten 340919 /var/tmp/spdk2.sock 00:06:20.348 16:17:29 -- common/autotest_common.sh@817 -- # '[' -z 340919 ']' 00:06:20.348 16:17:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:20.348 16:17:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:20.348 16:17:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:20.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:20.348 16:17:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:20.348 16:17:29 -- common/autotest_common.sh@10 -- # set +x 00:06:20.348 [2024-04-26 16:17:29.361644] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:06:20.348 [2024-04-26 16:17:29.361696] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid340919 ] 00:06:20.607 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.607 [2024-04-26 16:17:29.456012] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 340736 has claimed it. 00:06:20.607 [2024-04-26 16:17:29.456050] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:21.175 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (340919) - No such process 00:06:21.175 ERROR: process (pid: 340919) is no longer running 00:06:21.175 16:17:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:21.175 16:17:29 -- common/autotest_common.sh@850 -- # return 1 00:06:21.175 16:17:29 -- common/autotest_common.sh@641 -- # es=1 00:06:21.175 16:17:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:21.175 16:17:29 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:21.175 16:17:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:21.175 16:17:29 -- event/cpu_locks.sh@122 -- # locks_exist 340736 00:06:21.175 16:17:29 -- event/cpu_locks.sh@22 -- # lslocks -p 340736 00:06:21.175 16:17:29 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:21.743 lslocks: write error 00:06:21.743 16:17:30 -- event/cpu_locks.sh@124 -- # killprocess 340736 00:06:21.743 16:17:30 -- common/autotest_common.sh@936 -- # '[' -z 340736 ']' 00:06:21.743 16:17:30 -- common/autotest_common.sh@940 -- # kill -0 340736 00:06:21.743 16:17:30 -- common/autotest_common.sh@941 -- # uname 00:06:21.743 16:17:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:21.743 16:17:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 340736 00:06:21.743 16:17:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:21.743 16:17:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:21.743 16:17:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 340736' 00:06:21.743 killing process with pid 340736 00:06:21.743 16:17:30 -- common/autotest_common.sh@955 -- # kill 340736 00:06:21.743 16:17:30 -- common/autotest_common.sh@960 -- # wait 340736 00:06:22.012 00:06:22.012 real 0m2.527s 00:06:22.012 user 0m2.745s 00:06:22.012 sys 0m0.772s 00:06:22.012 16:17:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:22.012 16:17:30 -- common/autotest_common.sh@10 -- # set +x 00:06:22.012 ************************************ 00:06:22.012 END TEST locking_app_on_locked_coremask 00:06:22.012 ************************************ 00:06:22.012 16:17:31 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:22.012 16:17:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:22.012 16:17:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:22.012 16:17:31 -- common/autotest_common.sh@10 -- # set +x 00:06:22.271 ************************************ 00:06:22.271 START TEST locking_overlapped_coremask 00:06:22.271 ************************************ 
00:06:22.271 16:17:31 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:06:22.271 16:17:31 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=341135 00:06:22.271 16:17:31 -- event/cpu_locks.sh@133 -- # waitforlisten 341135 /var/tmp/spdk.sock 00:06:22.271 16:17:31 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:22.271 16:17:31 -- common/autotest_common.sh@817 -- # '[' -z 341135 ']' 00:06:22.271 16:17:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.271 16:17:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:22.271 16:17:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.271 16:17:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:22.271 16:17:31 -- common/autotest_common.sh@10 -- # set +x 00:06:22.271 [2024-04-26 16:17:31.260721] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:06:22.271 [2024-04-26 16:17:31.260777] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid341135 ] 00:06:22.271 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.530 [2024-04-26 16:17:31.332944] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:22.530 [2024-04-26 16:17:31.420202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.530 [2024-04-26 16:17:31.420293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:22.530 [2024-04-26 16:17:31.420295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.098 16:17:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:23.098 16:17:32 -- common/autotest_common.sh@850 -- # return 0 00:06:23.098 16:17:32 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=341319 00:06:23.098 16:17:32 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 341319 /var/tmp/spdk2.sock 00:06:23.098 16:17:32 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:23.098 16:17:32 -- common/autotest_common.sh@638 -- # local es=0 00:06:23.098 16:17:32 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 341319 /var/tmp/spdk2.sock 00:06:23.098 16:17:32 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:06:23.098 16:17:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:23.098 16:17:32 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:06:23.098 16:17:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:23.098 16:17:32 -- common/autotest_common.sh@641 -- # waitforlisten 341319 /var/tmp/spdk2.sock 00:06:23.098 16:17:32 -- common/autotest_common.sh@817 -- # '[' -z 341319 ']' 00:06:23.098 16:17:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:23.098 16:17:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:23.098 16:17:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:23.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:23.098 16:17:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:23.098 16:17:32 -- common/autotest_common.sh@10 -- # set +x 00:06:23.098 [2024-04-26 16:17:32.106725] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:06:23.098 [2024-04-26 16:17:32.106776] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid341319 ] 00:06:23.357 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.357 [2024-04-26 16:17:32.206825] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 341135 has claimed it. 00:06:23.357 [2024-04-26 16:17:32.206868] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:23.926 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (341319) - No such process 00:06:23.926 ERROR: process (pid: 341319) is no longer running 00:06:23.926 16:17:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:23.926 16:17:32 -- common/autotest_common.sh@850 -- # return 1 00:06:23.926 16:17:32 -- common/autotest_common.sh@641 -- # es=1 00:06:23.926 16:17:32 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:23.926 16:17:32 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:23.926 16:17:32 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:23.926 16:17:32 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:23.926 16:17:32 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:23.926 16:17:32 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:23.926 16:17:32 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:23.926 16:17:32 -- event/cpu_locks.sh@141 -- # killprocess 341135 00:06:23.926 16:17:32 -- common/autotest_common.sh@936 -- # '[' -z 341135 ']' 00:06:23.926 16:17:32 -- common/autotest_common.sh@940 -- # kill -0 341135 00:06:23.926 16:17:32 -- common/autotest_common.sh@941 -- # uname 00:06:23.926 16:17:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:23.926 16:17:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 341135 00:06:23.926 16:17:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:23.926 16:17:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:23.926 16:17:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 341135' 00:06:23.926 killing process with pid 341135 00:06:23.926 16:17:32 -- common/autotest_common.sh@955 -- # kill 341135 00:06:23.926 16:17:32 -- common/autotest_common.sh@960 -- # wait 341135 00:06:24.185 00:06:24.185 real 0m1.924s 00:06:24.185 user 0m5.258s 00:06:24.185 sys 0m0.496s 00:06:24.185 16:17:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:24.185 16:17:33 -- common/autotest_common.sh@10 -- # set +x 00:06:24.185 ************************************ 00:06:24.185 END TEST locking_overlapped_coremask 00:06:24.185 ************************************ 00:06:24.185 16:17:33 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:24.185 16:17:33 -- 
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:24.185 16:17:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:24.185 16:17:33 -- common/autotest_common.sh@10 -- # set +x 00:06:24.444 ************************************ 00:06:24.445 START TEST locking_overlapped_coremask_via_rpc 00:06:24.445 ************************************ 00:06:24.445 16:17:33 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:06:24.445 16:17:33 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=341541 00:06:24.445 16:17:33 -- event/cpu_locks.sh@149 -- # waitforlisten 341541 /var/tmp/spdk.sock 00:06:24.445 16:17:33 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:24.445 16:17:33 -- common/autotest_common.sh@817 -- # '[' -z 341541 ']' 00:06:24.445 16:17:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.445 16:17:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:24.445 16:17:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.445 16:17:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:24.445 16:17:33 -- common/autotest_common.sh@10 -- # set +x 00:06:24.445 [2024-04-26 16:17:33.398721] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:06:24.445 [2024-04-26 16:17:33.398778] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid341541 ] 00:06:24.445 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.704 [2024-04-26 16:17:33.470401] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:24.704 [2024-04-26 16:17:33.470430] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:24.704 [2024-04-26 16:17:33.553972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.704 [2024-04-26 16:17:33.554058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:24.704 [2024-04-26 16:17:33.554061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.271 16:17:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:25.271 16:17:34 -- common/autotest_common.sh@850 -- # return 0 00:06:25.271 16:17:34 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:25.271 16:17:34 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=341558 00:06:25.271 16:17:34 -- event/cpu_locks.sh@153 -- # waitforlisten 341558 /var/tmp/spdk2.sock 00:06:25.271 16:17:34 -- common/autotest_common.sh@817 -- # '[' -z 341558 ']' 00:06:25.271 16:17:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:25.271 16:17:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:25.271 16:17:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:25.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:25.271 16:17:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:25.271 16:17:34 -- common/autotest_common.sh@10 -- # set +x 00:06:25.271 [2024-04-26 16:17:34.233529] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:06:25.271 [2024-04-26 16:17:34.233581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid341558 ] 00:06:25.271 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.530 [2024-04-26 16:17:34.337087] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:25.530 [2024-04-26 16:17:34.337119] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:25.530 [2024-04-26 16:17:34.499731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:25.530 [2024-04-26 16:17:34.503401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:25.530 [2024-04-26 16:17:34.503402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:26.099 16:17:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:26.099 16:17:35 -- common/autotest_common.sh@850 -- # return 0 00:06:26.099 16:17:35 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:26.099 16:17:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:26.099 16:17:35 -- common/autotest_common.sh@10 -- # set +x 00:06:26.099 16:17:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:26.099 16:17:35 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:26.099 16:17:35 -- common/autotest_common.sh@638 -- # local es=0 00:06:26.099 16:17:35 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:26.099 16:17:35 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:06:26.099 16:17:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:26.099 16:17:35 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:06:26.099 16:17:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:26.099 16:17:35 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:26.099 16:17:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:26.099 16:17:35 -- common/autotest_common.sh@10 -- # set +x 00:06:26.099 [2024-04-26 16:17:35.072417] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 341541 has claimed it. 
00:06:26.099 request: 00:06:26.099 { 00:06:26.099 "method": "framework_enable_cpumask_locks", 00:06:26.099 "req_id": 1 00:06:26.099 } 00:06:26.099 Got JSON-RPC error response 00:06:26.099 response: 00:06:26.099 { 00:06:26.099 "code": -32603, 00:06:26.099 "message": "Failed to claim CPU core: 2" 00:06:26.099 } 00:06:26.099 16:17:35 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:06:26.099 16:17:35 -- common/autotest_common.sh@641 -- # es=1 00:06:26.099 16:17:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:26.099 16:17:35 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:26.099 16:17:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:26.099 16:17:35 -- event/cpu_locks.sh@158 -- # waitforlisten 341541 /var/tmp/spdk.sock 00:06:26.099 16:17:35 -- common/autotest_common.sh@817 -- # '[' -z 341541 ']' 00:06:26.099 16:17:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.099 16:17:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:26.099 16:17:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.099 16:17:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:26.099 16:17:35 -- common/autotest_common.sh@10 -- # set +x 00:06:26.358 16:17:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:26.358 16:17:35 -- common/autotest_common.sh@850 -- # return 0 00:06:26.358 16:17:35 -- event/cpu_locks.sh@159 -- # waitforlisten 341558 /var/tmp/spdk2.sock 00:06:26.358 16:17:35 -- common/autotest_common.sh@817 -- # '[' -z 341558 ']' 00:06:26.358 16:17:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:26.358 16:17:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:26.358 16:17:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:26.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:26.358 16:17:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:26.358 16:17:35 -- common/autotest_common.sh@10 -- # set +x 00:06:26.618 16:17:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:26.618 16:17:35 -- common/autotest_common.sh@850 -- # return 0 00:06:26.618 16:17:35 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:26.618 16:17:35 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:26.618 16:17:35 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:26.618 16:17:35 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:26.618 00:06:26.618 real 0m2.120s 00:06:26.618 user 0m0.843s 00:06:26.618 sys 0m0.212s 00:06:26.618 16:17:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:26.618 16:17:35 -- common/autotest_common.sh@10 -- # set +x 00:06:26.618 ************************************ 00:06:26.618 END TEST locking_overlapped_coremask_via_rpc 00:06:26.618 ************************************ 00:06:26.618 16:17:35 -- event/cpu_locks.sh@174 -- # cleanup 00:06:26.618 16:17:35 -- event/cpu_locks.sh@15 -- # [[ -z 341541 ]] 00:06:26.618 16:17:35 -- event/cpu_locks.sh@15 -- # killprocess 341541 00:06:26.618 16:17:35 -- common/autotest_common.sh@936 -- # '[' -z 341541 ']' 00:06:26.618 16:17:35 -- common/autotest_common.sh@940 -- # kill -0 341541 00:06:26.618 16:17:35 -- common/autotest_common.sh@941 -- # uname 00:06:26.618 16:17:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:26.618 16:17:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 341541 00:06:26.618 16:17:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:26.618 16:17:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:26.618 16:17:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 341541' 00:06:26.618 killing process with pid 341541 00:06:26.618 16:17:35 -- common/autotest_common.sh@955 -- # kill 341541 00:06:26.618 16:17:35 -- common/autotest_common.sh@960 -- # wait 341541 00:06:27.187 16:17:35 -- event/cpu_locks.sh@16 -- # [[ -z 341558 ]] 00:06:27.187 16:17:35 -- event/cpu_locks.sh@16 -- # killprocess 341558 00:06:27.187 16:17:35 -- common/autotest_common.sh@936 -- # '[' -z 341558 ']' 00:06:27.187 16:17:35 -- common/autotest_common.sh@940 -- # kill -0 341558 00:06:27.187 16:17:35 -- common/autotest_common.sh@941 -- # uname 00:06:27.187 16:17:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:27.187 16:17:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 341558 00:06:27.187 16:17:35 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:27.187 16:17:35 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:27.187 16:17:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 341558' 00:06:27.187 killing process with pid 341558 00:06:27.187 16:17:35 -- common/autotest_common.sh@955 -- # kill 341558 00:06:27.187 16:17:35 -- common/autotest_common.sh@960 -- # wait 341558 00:06:27.446 16:17:36 -- event/cpu_locks.sh@18 -- # rm -f 00:06:27.446 16:17:36 -- event/cpu_locks.sh@1 -- # cleanup 00:06:27.446 16:17:36 -- event/cpu_locks.sh@15 -- # [[ -z 341541 ]] 00:06:27.446 16:17:36 -- event/cpu_locks.sh@15 -- # killprocess 341541 00:06:27.446 
16:17:36 -- common/autotest_common.sh@936 -- # '[' -z 341541 ']' 00:06:27.446 16:17:36 -- common/autotest_common.sh@940 -- # kill -0 341541 00:06:27.446 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (341541) - No such process 00:06:27.446 16:17:36 -- common/autotest_common.sh@963 -- # echo 'Process with pid 341541 is not found' 00:06:27.446 Process with pid 341541 is not found 00:06:27.446 16:17:36 -- event/cpu_locks.sh@16 -- # [[ -z 341558 ]] 00:06:27.446 16:17:36 -- event/cpu_locks.sh@16 -- # killprocess 341558 00:06:27.446 16:17:36 -- common/autotest_common.sh@936 -- # '[' -z 341558 ']' 00:06:27.446 16:17:36 -- common/autotest_common.sh@940 -- # kill -0 341558 00:06:27.446 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (341558) - No such process 00:06:27.446 16:17:36 -- common/autotest_common.sh@963 -- # echo 'Process with pid 341558 is not found' 00:06:27.446 Process with pid 341558 is not found 00:06:27.446 16:17:36 -- event/cpu_locks.sh@18 -- # rm -f 00:06:27.446 00:06:27.446 real 0m20.254s 00:06:27.446 user 0m31.932s 00:06:27.446 sys 0m6.674s 00:06:27.446 16:17:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:27.446 16:17:36 -- common/autotest_common.sh@10 -- # set +x 00:06:27.446 ************************************ 00:06:27.446 END TEST cpu_locks 00:06:27.446 ************************************ 00:06:27.446 00:06:27.446 real 0m46.694s 00:06:27.446 user 1m23.218s 00:06:27.446 sys 0m11.264s 00:06:27.446 16:17:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:27.446 16:17:36 -- common/autotest_common.sh@10 -- # set +x 00:06:27.446 ************************************ 00:06:27.446 END TEST event 00:06:27.446 ************************************ 00:06:27.446 16:17:36 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:27.446 16:17:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:27.446 16:17:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.446 16:17:36 -- common/autotest_common.sh@10 -- # set +x 00:06:27.706 ************************************ 00:06:27.706 START TEST thread 00:06:27.706 ************************************ 00:06:27.706 16:17:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:27.965 * Looking for test storage... 00:06:27.965 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:06:27.965 16:17:36 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:27.965 16:17:36 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:27.965 16:17:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.965 16:17:36 -- common/autotest_common.sh@10 -- # set +x 00:06:27.965 ************************************ 00:06:27.965 START TEST thread_poller_perf 00:06:27.965 ************************************ 00:06:27.965 16:17:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:27.965 [2024-04-26 16:17:36.923243] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
00:06:27.965 [2024-04-26 16:17:36.923324] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid342035 ] 00:06:27.965 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.224 [2024-04-26 16:17:36.998683] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.224 [2024-04-26 16:17:37.078932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.224 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:29.162 ====================================== 00:06:29.162 busy:2307265880 (cyc) 00:06:29.162 total_run_count: 424000 00:06:29.162 tsc_hz: 2300000000 (cyc) 00:06:29.162 ====================================== 00:06:29.162 poller_cost: 5441 (cyc), 2365 (nsec) 00:06:29.162 00:06:29.162 real 0m1.274s 00:06:29.162 user 0m1.175s 00:06:29.162 sys 0m0.095s 00:06:29.162 16:17:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:29.162 16:17:38 -- common/autotest_common.sh@10 -- # set +x 00:06:29.162 ************************************ 00:06:29.162 END TEST thread_poller_perf 00:06:29.162 ************************************ 00:06:29.421 16:17:38 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:29.421 16:17:38 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:29.421 16:17:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.421 16:17:38 -- common/autotest_common.sh@10 -- # set +x 00:06:29.421 ************************************ 00:06:29.421 START TEST thread_poller_perf 00:06:29.421 ************************************ 00:06:29.421 16:17:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:29.421 [2024-04-26 16:17:38.398177] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:06:29.421 [2024-04-26 16:17:38.398263] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid342256 ] 00:06:29.421 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.680 [2024-04-26 16:17:38.472808] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.680 [2024-04-26 16:17:38.555061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.680 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:31.056 ====================================== 00:06:31.056 busy:2301696496 (cyc) 00:06:31.056 total_run_count: 5572000 00:06:31.056 tsc_hz: 2300000000 (cyc) 00:06:31.056 ====================================== 00:06:31.056 poller_cost: 413 (cyc), 179 (nsec) 00:06:31.056 00:06:31.056 real 0m1.271s 00:06:31.056 user 0m1.166s 00:06:31.056 sys 0m0.101s 00:06:31.056 16:17:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:31.056 16:17:39 -- common/autotest_common.sh@10 -- # set +x 00:06:31.056 ************************************ 00:06:31.056 END TEST thread_poller_perf 00:06:31.056 ************************************ 00:06:31.056 16:17:39 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:31.056 00:06:31.056 real 0m3.044s 00:06:31.056 user 0m2.498s 00:06:31.056 sys 0m0.511s 00:06:31.056 16:17:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:31.056 16:17:39 -- common/autotest_common.sh@10 -- # set +x 00:06:31.056 ************************************ 00:06:31.056 END TEST thread 00:06:31.056 ************************************ 00:06:31.056 16:17:39 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:06:31.056 16:17:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:31.056 16:17:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.056 16:17:39 -- common/autotest_common.sh@10 -- # set +x 00:06:31.056 ************************************ 00:06:31.056 START TEST accel 00:06:31.056 ************************************ 00:06:31.056 16:17:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:06:31.056 * Looking for test storage... 00:06:31.056 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:06:31.056 16:17:39 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:31.056 16:17:39 -- accel/accel.sh@82 -- # get_expected_opcs 00:06:31.056 16:17:39 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:31.056 16:17:39 -- accel/accel.sh@62 -- # spdk_tgt_pid=342632 00:06:31.056 16:17:39 -- accel/accel.sh@63 -- # waitforlisten 342632 00:06:31.056 16:17:39 -- common/autotest_common.sh@817 -- # '[' -z 342632 ']' 00:06:31.056 16:17:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.056 16:17:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:31.056 16:17:39 -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:31.056 16:17:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.056 16:17:39 -- accel/accel.sh@61 -- # build_accel_config 00:06:31.056 16:17:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:31.056 16:17:39 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.056 16:17:39 -- common/autotest_common.sh@10 -- # set +x 00:06:31.056 16:17:39 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.056 16:17:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.056 16:17:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.056 16:17:39 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.056 16:17:39 -- accel/accel.sh@40 -- # local IFS=, 00:06:31.056 16:17:39 -- accel/accel.sh@41 -- # jq -r . 
00:06:31.056 [2024-04-26 16:17:40.045921] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:06:31.056 [2024-04-26 16:17:40.045981] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid342632 ] 00:06:31.056 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.316 [2024-04-26 16:17:40.118197] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.316 [2024-04-26 16:17:40.201320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.885 16:17:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:31.885 16:17:40 -- common/autotest_common.sh@850 -- # return 0 00:06:31.885 16:17:40 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:31.885 16:17:40 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:31.885 16:17:40 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:31.885 16:17:40 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:31.885 16:17:40 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:31.885 16:17:40 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:31.885 16:17:40 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:06:31.885 16:17:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:31.885 16:17:40 -- common/autotest_common.sh@10 -- # set +x 00:06:31.885 16:17:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:31.885 16:17:40 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:31.885 16:17:40 -- accel/accel.sh@72 -- # IFS== 00:06:31.885 16:17:40 -- accel/accel.sh@72 -- # read -r opc module 00:06:31.885 16:17:40 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:31.885 16:17:40 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:31.885 16:17:40 -- accel/accel.sh@72 -- # IFS== 00:06:31.885 16:17:40 -- accel/accel.sh@72 -- # read -r opc module 00:06:31.885 16:17:40 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:31.885 16:17:40 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:31.885 16:17:40 -- accel/accel.sh@72 -- # IFS== 00:06:31.885 16:17:40 -- accel/accel.sh@72 -- # read -r opc module 00:06:31.885 16:17:40 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:31.885 16:17:40 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:31.885 16:17:40 -- accel/accel.sh@72 -- # IFS== 00:06:31.885 16:17:40 -- accel/accel.sh@72 -- # read -r opc module 00:06:31.885 16:17:40 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:31.885 16:17:40 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:31.885 16:17:40 -- accel/accel.sh@72 -- # IFS== 00:06:31.885 16:17:40 -- accel/accel.sh@72 -- # read -r opc module 00:06:31.885 16:17:40 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:31.885 16:17:40 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:31.885 16:17:40 -- accel/accel.sh@72 -- # IFS== 00:06:31.885 16:17:40 -- accel/accel.sh@72 -- # read -r opc module 00:06:31.885 16:17:40 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:31.885 16:17:40 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:32.146 16:17:40 -- accel/accel.sh@72 -- # IFS== 00:06:32.146 16:17:40 -- accel/accel.sh@72 -- # read -r opc module 00:06:32.146 16:17:40 -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:06:32.146 16:17:40 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:32.146 16:17:40 -- accel/accel.sh@72 -- # IFS== 00:06:32.146 16:17:40 -- accel/accel.sh@72 -- # read -r opc module 00:06:32.146 16:17:40 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:32.146 16:17:40 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:32.146 16:17:40 -- accel/accel.sh@72 -- # IFS== 00:06:32.146 16:17:40 -- accel/accel.sh@72 -- # read -r opc module 00:06:32.146 16:17:40 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:32.146 16:17:40 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:32.146 16:17:40 -- accel/accel.sh@72 -- # IFS== 00:06:32.146 16:17:40 -- accel/accel.sh@72 -- # read -r opc module 00:06:32.146 16:17:40 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:32.146 16:17:40 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:32.146 16:17:40 -- accel/accel.sh@72 -- # IFS== 00:06:32.146 16:17:40 -- accel/accel.sh@72 -- # read -r opc module 00:06:32.146 16:17:40 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:32.146 16:17:40 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:32.146 16:17:40 -- accel/accel.sh@72 -- # IFS== 00:06:32.146 16:17:40 -- accel/accel.sh@72 -- # read -r opc module 00:06:32.146 16:17:40 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:32.146 16:17:40 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:32.146 16:17:40 -- accel/accel.sh@72 -- # IFS== 00:06:32.146 16:17:40 -- accel/accel.sh@72 -- # read -r opc module 00:06:32.146 16:17:40 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:32.146 16:17:40 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:32.146 16:17:40 -- accel/accel.sh@72 -- # IFS== 00:06:32.146 16:17:40 -- accel/accel.sh@72 -- # read -r opc module 00:06:32.146 16:17:40 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:32.146 16:17:40 -- accel/accel.sh@75 -- # killprocess 342632 00:06:32.146 16:17:40 -- common/autotest_common.sh@936 -- # '[' -z 342632 ']' 00:06:32.146 16:17:40 -- common/autotest_common.sh@940 -- # kill -0 342632 00:06:32.146 16:17:40 -- common/autotest_common.sh@941 -- # uname 00:06:32.146 16:17:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:32.146 16:17:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 342632 00:06:32.146 16:17:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:32.146 16:17:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:32.146 16:17:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 342632' 00:06:32.146 killing process with pid 342632 00:06:32.146 16:17:40 -- common/autotest_common.sh@955 -- # kill 342632 00:06:32.146 16:17:40 -- common/autotest_common.sh@960 -- # wait 342632 00:06:32.405 16:17:41 -- accel/accel.sh@76 -- # trap - ERR 00:06:32.405 16:17:41 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:32.405 16:17:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:32.405 16:17:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:32.405 16:17:41 -- common/autotest_common.sh@10 -- # set +x 00:06:32.664 16:17:41 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:06:32.664 16:17:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:32.664 16:17:41 -- accel/accel.sh@12 -- # build_accel_config 
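The expected_opcs table populated above comes from asking the freshly started target which module handles each accel opcode and splitting the answer into opcode/module pairs; with no accel JSON configuration supplied (accel_json_cfg stays empty), every opcode is reported as handled by the software module, which is why each slot is set to "software". A reduced sketch of the same query-and-parse step, assuming a running SPDK target reachable through scripts/rpc.py on the default socket (the array name below is illustrative, not the script's own):

  # Map each accel opcode to the module the target says will handle it.
  declare -A opc_module
  while IFS='=' read -r opc module; do
          opc_module["$opc"]=$module
  done < <(./scripts/rpc.py accel_get_opc_assignments \
          | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]')
  # With no hardware accel config loaded, every entry should read "software".
  for opc in "${!opc_module[@]}"; do echo "$opc -> ${opc_module[$opc]}"; done
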
00:06:32.664 16:17:41 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.664 16:17:41 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.664 16:17:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.664 16:17:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.664 16:17:41 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.664 16:17:41 -- accel/accel.sh@40 -- # local IFS=, 00:06:32.664 16:17:41 -- accel/accel.sh@41 -- # jq -r . 00:06:32.664 16:17:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:32.664 16:17:41 -- common/autotest_common.sh@10 -- # set +x 00:06:32.664 16:17:41 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:32.664 16:17:41 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:32.664 16:17:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:32.664 16:17:41 -- common/autotest_common.sh@10 -- # set +x 00:06:32.923 ************************************ 00:06:32.923 START TEST accel_missing_filename 00:06:32.923 ************************************ 00:06:32.923 16:17:41 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:06:32.923 16:17:41 -- common/autotest_common.sh@638 -- # local es=0 00:06:32.923 16:17:41 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:32.923 16:17:41 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:32.923 16:17:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:32.924 16:17:41 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:32.924 16:17:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:32.924 16:17:41 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:06:32.924 16:17:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:32.924 16:17:41 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.924 16:17:41 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.924 16:17:41 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.924 16:17:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.924 16:17:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.924 16:17:41 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.924 16:17:41 -- accel/accel.sh@40 -- # local IFS=, 00:06:32.924 16:17:41 -- accel/accel.sh@41 -- # jq -r . 00:06:32.924 [2024-04-26 16:17:41.740758] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:06:32.924 [2024-04-26 16:17:41.740819] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid342899 ] 00:06:32.924 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.924 [2024-04-26 16:17:41.815629] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.924 [2024-04-26 16:17:41.896277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.924 [2024-04-26 16:17:41.944130] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:33.183 [2024-04-26 16:17:42.013854] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:06:33.183 A filename is required. 
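"A filename is required." is the outcome this case is after: compress was started without -l, so accel_perf refuses to run, and the NOT wrapper turns that non-zero exit into a pass; the es=234 / es=106 / es=1 lines that follow show the harness folding the raw status down to a simple pass/fail. A stand-alone sketch of the same negative-test pattern, with the binary path given relative to an SPDK checkout (an illustration only, not the actual NOT helper from autotest_common.sh):

  # Run a command that is expected to fail; report success only if it really failed.
  expect_failure() {
          local es=0
          "$@" || es=$?
          (( es != 0 ))            # status 0 (pass) only when the command exited non-zero
  }
  expect_failure ./build/examples/accel_perf -t 1 -w compress && echo 'negative test passed'
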
00:06:33.183 16:17:42 -- common/autotest_common.sh@641 -- # es=234 00:06:33.183 16:17:42 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:33.183 16:17:42 -- common/autotest_common.sh@650 -- # es=106 00:06:33.183 16:17:42 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:33.183 16:17:42 -- common/autotest_common.sh@658 -- # es=1 00:06:33.183 16:17:42 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:33.183 00:06:33.183 real 0m0.394s 00:06:33.183 user 0m0.290s 00:06:33.183 sys 0m0.142s 00:06:33.183 16:17:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:33.183 16:17:42 -- common/autotest_common.sh@10 -- # set +x 00:06:33.183 ************************************ 00:06:33.183 END TEST accel_missing_filename 00:06:33.183 ************************************ 00:06:33.183 16:17:42 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:33.183 16:17:42 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:33.183 16:17:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:33.183 16:17:42 -- common/autotest_common.sh@10 -- # set +x 00:06:33.442 ************************************ 00:06:33.442 START TEST accel_compress_verify 00:06:33.442 ************************************ 00:06:33.442 16:17:42 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:33.442 16:17:42 -- common/autotest_common.sh@638 -- # local es=0 00:06:33.442 16:17:42 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:33.442 16:17:42 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:33.442 16:17:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:33.442 16:17:42 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:33.442 16:17:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:33.442 16:17:42 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:33.442 16:17:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:33.442 16:17:42 -- accel/accel.sh@12 -- # build_accel_config 00:06:33.442 16:17:42 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.442 16:17:42 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.442 16:17:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.442 16:17:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.442 16:17:42 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.442 16:17:42 -- accel/accel.sh@40 -- # local IFS=, 00:06:33.442 16:17:42 -- accel/accel.sh@41 -- # jq -r . 00:06:33.442 [2024-04-26 16:17:42.313751] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
00:06:33.442 [2024-04-26 16:17:42.313812] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid342931 ] 00:06:33.442 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.442 [2024-04-26 16:17:42.386472] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.702 [2024-04-26 16:17:42.471952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.702 [2024-04-26 16:17:42.521044] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:33.702 [2024-04-26 16:17:42.587355] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:06:33.702 00:06:33.702 Compression does not support the verify option, aborting. 00:06:33.702 16:17:42 -- common/autotest_common.sh@641 -- # es=161 00:06:33.702 16:17:42 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:33.702 16:17:42 -- common/autotest_common.sh@650 -- # es=33 00:06:33.702 16:17:42 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:33.702 16:17:42 -- common/autotest_common.sh@658 -- # es=1 00:06:33.702 16:17:42 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:33.702 00:06:33.702 real 0m0.393s 00:06:33.702 user 0m0.290s 00:06:33.702 sys 0m0.141s 00:06:33.702 16:17:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:33.702 16:17:42 -- common/autotest_common.sh@10 -- # set +x 00:06:33.702 ************************************ 00:06:33.702 END TEST accel_compress_verify 00:06:33.702 ************************************ 00:06:33.702 16:17:42 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:33.702 16:17:42 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:33.702 16:17:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:33.702 16:17:42 -- common/autotest_common.sh@10 -- # set +x 00:06:33.962 ************************************ 00:06:33.962 START TEST accel_wrong_workload 00:06:33.962 ************************************ 00:06:33.962 16:17:42 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:06:33.962 16:17:42 -- common/autotest_common.sh@638 -- # local es=0 00:06:33.962 16:17:42 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:33.962 16:17:42 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:33.962 16:17:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:33.962 16:17:42 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:33.962 16:17:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:33.962 16:17:42 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:06:33.962 16:17:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:33.962 16:17:42 -- accel/accel.sh@12 -- # build_accel_config 00:06:33.962 16:17:42 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.962 16:17:42 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.962 16:17:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.962 16:17:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.962 16:17:42 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.962 16:17:42 -- accel/accel.sh@40 -- # local IFS=, 00:06:33.962 16:17:42 -- accel/accel.sh@41 -- # jq -r . 
00:06:33.962 Unsupported workload type: foobar 00:06:33.962 [2024-04-26 16:17:42.911727] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:33.962 accel_perf options: 00:06:33.962 [-h help message] 00:06:33.962 [-q queue depth per core] 00:06:33.962 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:33.962 [-T number of threads per core 00:06:33.962 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:33.962 [-t time in seconds] 00:06:33.962 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:33.962 [ dif_verify, , dif_generate, dif_generate_copy 00:06:33.962 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:33.962 [-l for compress/decompress workloads, name of uncompressed input file 00:06:33.962 [-S for crc32c workload, use this seed value (default 0) 00:06:33.962 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:33.962 [-f for fill workload, use this BYTE value (default 255) 00:06:33.962 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:33.962 [-y verify result if this switch is on] 00:06:33.962 [-a tasks to allocate per core (default: same value as -q)] 00:06:33.962 Can be used to spread operations across a wider range of memory. 00:06:33.962 16:17:42 -- common/autotest_common.sh@641 -- # es=1 00:06:33.962 16:17:42 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:33.962 16:17:42 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:33.962 16:17:42 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:33.962 00:06:33.962 real 0m0.039s 00:06:33.962 user 0m0.020s 00:06:33.962 sys 0m0.019s 00:06:33.962 16:17:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:33.962 16:17:42 -- common/autotest_common.sh@10 -- # set +x 00:06:33.962 ************************************ 00:06:33.962 END TEST accel_wrong_workload 00:06:33.962 ************************************ 00:06:33.962 Error: writing output failed: Broken pipe 00:06:33.962 16:17:42 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:33.962 16:17:42 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:33.962 16:17:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:33.962 16:17:42 -- common/autotest_common.sh@10 -- # set +x 00:06:34.222 ************************************ 00:06:34.222 START TEST accel_negative_buffers 00:06:34.222 ************************************ 00:06:34.222 16:17:43 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:34.222 16:17:43 -- common/autotest_common.sh@638 -- # local es=0 00:06:34.222 16:17:43 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:34.222 16:17:43 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:34.222 16:17:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:34.222 16:17:43 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:34.222 16:17:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:34.222 16:17:43 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:06:34.222 16:17:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 
-x -1 00:06:34.222 16:17:43 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.222 16:17:43 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.222 16:17:43 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.222 16:17:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.222 16:17:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.222 16:17:43 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.222 16:17:43 -- accel/accel.sh@40 -- # local IFS=, 00:06:34.222 16:17:43 -- accel/accel.sh@41 -- # jq -r . 00:06:34.222 -x option must be non-negative. 00:06:34.222 [2024-04-26 16:17:43.150784] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:34.222 accel_perf options: 00:06:34.222 [-h help message] 00:06:34.222 [-q queue depth per core] 00:06:34.222 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:34.222 [-T number of threads per core 00:06:34.222 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:34.222 [-t time in seconds] 00:06:34.222 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:34.222 [ dif_verify, , dif_generate, dif_generate_copy 00:06:34.222 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:34.222 [-l for compress/decompress workloads, name of uncompressed input file 00:06:34.222 [-S for crc32c workload, use this seed value (default 0) 00:06:34.222 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:34.223 [-f for fill workload, use this BYTE value (default 255) 00:06:34.223 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:34.223 [-y verify result if this switch is on] 00:06:34.223 [-a tasks to allocate per core (default: same value as -q)] 00:06:34.223 Can be used to spread operations across a wider range of memory. 
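The option listing above is printed for the -h help case and again when the foobar workload and the negative -x value are rejected. For reference, the positive cases later in this run drive the same binary with a supported workload; stripped of the harness-specific -c /dev/fd/62 JSON config, the CRC-32C invocation used below boils down to (path as used in this workspace):

  # CRC-32C workload: 1-second run (-t 1), seed value 32 (-S 32), verify results (-y).
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y
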
00:06:34.223 16:17:43 -- common/autotest_common.sh@641 -- # es=1 00:06:34.223 16:17:43 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:34.223 16:17:43 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:34.223 16:17:43 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:34.223 00:06:34.223 real 0m0.038s 00:06:34.223 user 0m0.020s 00:06:34.223 sys 0m0.018s 00:06:34.223 16:17:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:34.223 16:17:43 -- common/autotest_common.sh@10 -- # set +x 00:06:34.223 ************************************ 00:06:34.223 END TEST accel_negative_buffers 00:06:34.223 ************************************ 00:06:34.223 Error: writing output failed: Broken pipe 00:06:34.223 16:17:43 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:34.223 16:17:43 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:34.223 16:17:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:34.223 16:17:43 -- common/autotest_common.sh@10 -- # set +x 00:06:34.481 ************************************ 00:06:34.481 START TEST accel_crc32c 00:06:34.481 ************************************ 00:06:34.481 16:17:43 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:34.481 16:17:43 -- accel/accel.sh@16 -- # local accel_opc 00:06:34.481 16:17:43 -- accel/accel.sh@17 -- # local accel_module 00:06:34.481 16:17:43 -- accel/accel.sh@19 -- # IFS=: 00:06:34.481 16:17:43 -- accel/accel.sh@19 -- # read -r var val 00:06:34.481 16:17:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:34.481 16:17:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:34.481 16:17:43 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.481 16:17:43 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.481 16:17:43 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.481 16:17:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.481 16:17:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.481 16:17:43 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.481 16:17:43 -- accel/accel.sh@40 -- # local IFS=, 00:06:34.481 16:17:43 -- accel/accel.sh@41 -- # jq -r . 00:06:34.481 [2024-04-26 16:17:43.379638] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
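The long runs of IFS=:, read -r var val and case "$var" in trace lines in the crc32c case that follows (and in every later accel case) are accel.sh stepping through colon-separated key/value records of the test configuration, noting the operation, buffer size, module, queue depth and run time before asserting that the software module handled the opcode. A reduced sketch of that parsing idea, feeding sample "key: value" records through a pipe; the key names in the final check are placeholders, since the trace only echoes the values:

  # Collect "key: value" pairs; the test then asserts on the entries it cares about.
  declare -A seen
  while IFS=: read -r var val; do
          seen["${var//[[:space:]]/}"]=${val# }
  done < <(printf 'opc: crc32c\nmodule: software\n')     # sample records for illustration
  # Placeholder key name: assert the software module was the one used.
  [[ ${seen[module]:-} == software ]] && echo 'software module used'
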
00:06:34.481 [2024-04-26 16:17:43.379707] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid343186 ] 00:06:34.481 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.481 [2024-04-26 16:17:43.453335] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.740 [2024-04-26 16:17:43.537314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.740 16:17:43 -- accel/accel.sh@20 -- # val= 00:06:34.740 16:17:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.740 16:17:43 -- accel/accel.sh@19 -- # IFS=: 00:06:34.740 16:17:43 -- accel/accel.sh@19 -- # read -r var val 00:06:34.740 16:17:43 -- accel/accel.sh@20 -- # val= 00:06:34.740 16:17:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.740 16:17:43 -- accel/accel.sh@19 -- # IFS=: 00:06:34.740 16:17:43 -- accel/accel.sh@19 -- # read -r var val 00:06:34.740 16:17:43 -- accel/accel.sh@20 -- # val=0x1 00:06:34.740 16:17:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.740 16:17:43 -- accel/accel.sh@19 -- # IFS=: 00:06:34.740 16:17:43 -- accel/accel.sh@19 -- # read -r var val 00:06:34.740 16:17:43 -- accel/accel.sh@20 -- # val= 00:06:34.740 16:17:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.740 16:17:43 -- accel/accel.sh@19 -- # IFS=: 00:06:34.740 16:17:43 -- accel/accel.sh@19 -- # read -r var val 00:06:34.740 16:17:43 -- accel/accel.sh@20 -- # val= 00:06:34.740 16:17:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.740 16:17:43 -- accel/accel.sh@19 -- # IFS=: 00:06:34.740 16:17:43 -- accel/accel.sh@19 -- # read -r var val 00:06:34.740 16:17:43 -- accel/accel.sh@20 -- # val=crc32c 00:06:34.740 16:17:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.740 16:17:43 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:34.740 16:17:43 -- accel/accel.sh@19 -- # IFS=: 00:06:34.740 16:17:43 -- accel/accel.sh@19 -- # read -r var val 00:06:34.740 16:17:43 -- accel/accel.sh@20 -- # val=32 00:06:34.740 16:17:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.740 16:17:43 -- accel/accel.sh@19 -- # IFS=: 00:06:34.740 16:17:43 -- accel/accel.sh@19 -- # read -r var val 00:06:34.740 16:17:43 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:34.740 16:17:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.740 16:17:43 -- accel/accel.sh@19 -- # IFS=: 00:06:34.740 16:17:43 -- accel/accel.sh@19 -- # read -r var val 00:06:34.740 16:17:43 -- accel/accel.sh@20 -- # val= 00:06:34.740 16:17:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.740 16:17:43 -- accel/accel.sh@19 -- # IFS=: 00:06:34.741 16:17:43 -- accel/accel.sh@19 -- # read -r var val 00:06:34.741 16:17:43 -- accel/accel.sh@20 -- # val=software 00:06:34.741 16:17:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.741 16:17:43 -- accel/accel.sh@22 -- # accel_module=software 00:06:34.741 16:17:43 -- accel/accel.sh@19 -- # IFS=: 00:06:34.741 16:17:43 -- accel/accel.sh@19 -- # read -r var val 00:06:34.741 16:17:43 -- accel/accel.sh@20 -- # val=32 00:06:34.741 16:17:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.741 16:17:43 -- accel/accel.sh@19 -- # IFS=: 00:06:34.741 16:17:43 -- accel/accel.sh@19 -- # read -r var val 00:06:34.741 16:17:43 -- accel/accel.sh@20 -- # val=32 00:06:34.741 16:17:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.741 16:17:43 -- accel/accel.sh@19 -- # IFS=: 00:06:34.741 16:17:43 -- accel/accel.sh@19 -- # read -r var val 00:06:34.741 16:17:43 -- 
accel/accel.sh@20 -- # val=1 00:06:34.741 16:17:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.741 16:17:43 -- accel/accel.sh@19 -- # IFS=: 00:06:34.741 16:17:43 -- accel/accel.sh@19 -- # read -r var val 00:06:34.741 16:17:43 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:34.741 16:17:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.741 16:17:43 -- accel/accel.sh@19 -- # IFS=: 00:06:34.741 16:17:43 -- accel/accel.sh@19 -- # read -r var val 00:06:34.741 16:17:43 -- accel/accel.sh@20 -- # val=Yes 00:06:34.741 16:17:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.741 16:17:43 -- accel/accel.sh@19 -- # IFS=: 00:06:34.741 16:17:43 -- accel/accel.sh@19 -- # read -r var val 00:06:34.741 16:17:43 -- accel/accel.sh@20 -- # val= 00:06:34.741 16:17:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.741 16:17:43 -- accel/accel.sh@19 -- # IFS=: 00:06:34.741 16:17:43 -- accel/accel.sh@19 -- # read -r var val 00:06:34.741 16:17:43 -- accel/accel.sh@20 -- # val= 00:06:34.741 16:17:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.741 16:17:43 -- accel/accel.sh@19 -- # IFS=: 00:06:34.741 16:17:43 -- accel/accel.sh@19 -- # read -r var val 00:06:36.116 16:17:44 -- accel/accel.sh@20 -- # val= 00:06:36.116 16:17:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.116 16:17:44 -- accel/accel.sh@19 -- # IFS=: 00:06:36.116 16:17:44 -- accel/accel.sh@19 -- # read -r var val 00:06:36.116 16:17:44 -- accel/accel.sh@20 -- # val= 00:06:36.116 16:17:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.116 16:17:44 -- accel/accel.sh@19 -- # IFS=: 00:06:36.116 16:17:44 -- accel/accel.sh@19 -- # read -r var val 00:06:36.116 16:17:44 -- accel/accel.sh@20 -- # val= 00:06:36.116 16:17:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.116 16:17:44 -- accel/accel.sh@19 -- # IFS=: 00:06:36.116 16:17:44 -- accel/accel.sh@19 -- # read -r var val 00:06:36.116 16:17:44 -- accel/accel.sh@20 -- # val= 00:06:36.116 16:17:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.116 16:17:44 -- accel/accel.sh@19 -- # IFS=: 00:06:36.116 16:17:44 -- accel/accel.sh@19 -- # read -r var val 00:06:36.116 16:17:44 -- accel/accel.sh@20 -- # val= 00:06:36.116 16:17:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.116 16:17:44 -- accel/accel.sh@19 -- # IFS=: 00:06:36.117 16:17:44 -- accel/accel.sh@19 -- # read -r var val 00:06:36.117 16:17:44 -- accel/accel.sh@20 -- # val= 00:06:36.117 16:17:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.117 16:17:44 -- accel/accel.sh@19 -- # IFS=: 00:06:36.117 16:17:44 -- accel/accel.sh@19 -- # read -r var val 00:06:36.117 16:17:44 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:36.117 16:17:44 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:36.117 16:17:44 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.117 00:06:36.117 real 0m1.400s 00:06:36.117 user 0m1.255s 00:06:36.117 sys 0m0.150s 00:06:36.117 16:17:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:36.117 16:17:44 -- common/autotest_common.sh@10 -- # set +x 00:06:36.117 ************************************ 00:06:36.117 END TEST accel_crc32c 00:06:36.117 ************************************ 00:06:36.117 16:17:44 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:36.117 16:17:44 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:36.117 16:17:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:36.117 16:17:44 -- common/autotest_common.sh@10 -- # set +x 00:06:36.117 ************************************ 00:06:36.117 START TEST 
accel_crc32c_C2 00:06:36.117 ************************************ 00:06:36.117 16:17:44 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:36.117 16:17:44 -- accel/accel.sh@16 -- # local accel_opc 00:06:36.117 16:17:44 -- accel/accel.sh@17 -- # local accel_module 00:06:36.117 16:17:44 -- accel/accel.sh@19 -- # IFS=: 00:06:36.117 16:17:44 -- accel/accel.sh@19 -- # read -r var val 00:06:36.117 16:17:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:36.117 16:17:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:36.117 16:17:44 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.117 16:17:44 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.117 16:17:44 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.117 16:17:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.117 16:17:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.117 16:17:44 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.117 16:17:44 -- accel/accel.sh@40 -- # local IFS=, 00:06:36.117 16:17:44 -- accel/accel.sh@41 -- # jq -r . 00:06:36.117 [2024-04-26 16:17:44.988957] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:06:36.117 [2024-04-26 16:17:44.989019] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid343401 ] 00:06:36.117 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.117 [2024-04-26 16:17:45.060801] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.117 [2024-04-26 16:17:45.138758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.375 16:17:45 -- accel/accel.sh@20 -- # val= 00:06:36.375 16:17:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.375 16:17:45 -- accel/accel.sh@19 -- # IFS=: 00:06:36.375 16:17:45 -- accel/accel.sh@19 -- # read -r var val 00:06:36.375 16:17:45 -- accel/accel.sh@20 -- # val= 00:06:36.375 16:17:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.375 16:17:45 -- accel/accel.sh@19 -- # IFS=: 00:06:36.375 16:17:45 -- accel/accel.sh@19 -- # read -r var val 00:06:36.375 16:17:45 -- accel/accel.sh@20 -- # val=0x1 00:06:36.375 16:17:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.375 16:17:45 -- accel/accel.sh@19 -- # IFS=: 00:06:36.375 16:17:45 -- accel/accel.sh@19 -- # read -r var val 00:06:36.375 16:17:45 -- accel/accel.sh@20 -- # val= 00:06:36.375 16:17:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.375 16:17:45 -- accel/accel.sh@19 -- # IFS=: 00:06:36.375 16:17:45 -- accel/accel.sh@19 -- # read -r var val 00:06:36.375 16:17:45 -- accel/accel.sh@20 -- # val= 00:06:36.375 16:17:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.375 16:17:45 -- accel/accel.sh@19 -- # IFS=: 00:06:36.375 16:17:45 -- accel/accel.sh@19 -- # read -r var val 00:06:36.375 16:17:45 -- accel/accel.sh@20 -- # val=crc32c 00:06:36.375 16:17:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.375 16:17:45 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:36.375 16:17:45 -- accel/accel.sh@19 -- # IFS=: 00:06:36.375 16:17:45 -- accel/accel.sh@19 -- # read -r var val 00:06:36.375 16:17:45 -- accel/accel.sh@20 -- # val=0 00:06:36.375 16:17:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.375 16:17:45 -- accel/accel.sh@19 -- # IFS=: 00:06:36.375 16:17:45 -- accel/accel.sh@19 -- # read -r var val 00:06:36.375 16:17:45 -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:06:36.375 16:17:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.375 16:17:45 -- accel/accel.sh@19 -- # IFS=: 00:06:36.375 16:17:45 -- accel/accel.sh@19 -- # read -r var val 00:06:36.375 16:17:45 -- accel/accel.sh@20 -- # val= 00:06:36.375 16:17:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.375 16:17:45 -- accel/accel.sh@19 -- # IFS=: 00:06:36.375 16:17:45 -- accel/accel.sh@19 -- # read -r var val 00:06:36.375 16:17:45 -- accel/accel.sh@20 -- # val=software 00:06:36.375 16:17:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.375 16:17:45 -- accel/accel.sh@22 -- # accel_module=software 00:06:36.375 16:17:45 -- accel/accel.sh@19 -- # IFS=: 00:06:36.375 16:17:45 -- accel/accel.sh@19 -- # read -r var val 00:06:36.375 16:17:45 -- accel/accel.sh@20 -- # val=32 00:06:36.375 16:17:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.375 16:17:45 -- accel/accel.sh@19 -- # IFS=: 00:06:36.375 16:17:45 -- accel/accel.sh@19 -- # read -r var val 00:06:36.375 16:17:45 -- accel/accel.sh@20 -- # val=32 00:06:36.375 16:17:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.375 16:17:45 -- accel/accel.sh@19 -- # IFS=: 00:06:36.375 16:17:45 -- accel/accel.sh@19 -- # read -r var val 00:06:36.375 16:17:45 -- accel/accel.sh@20 -- # val=1 00:06:36.375 16:17:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.375 16:17:45 -- accel/accel.sh@19 -- # IFS=: 00:06:36.375 16:17:45 -- accel/accel.sh@19 -- # read -r var val 00:06:36.375 16:17:45 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:36.375 16:17:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.375 16:17:45 -- accel/accel.sh@19 -- # IFS=: 00:06:36.375 16:17:45 -- accel/accel.sh@19 -- # read -r var val 00:06:36.375 16:17:45 -- accel/accel.sh@20 -- # val=Yes 00:06:36.375 16:17:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.375 16:17:45 -- accel/accel.sh@19 -- # IFS=: 00:06:36.375 16:17:45 -- accel/accel.sh@19 -- # read -r var val 00:06:36.375 16:17:45 -- accel/accel.sh@20 -- # val= 00:06:36.375 16:17:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.375 16:17:45 -- accel/accel.sh@19 -- # IFS=: 00:06:36.375 16:17:45 -- accel/accel.sh@19 -- # read -r var val 00:06:36.375 16:17:45 -- accel/accel.sh@20 -- # val= 00:06:36.375 16:17:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.375 16:17:45 -- accel/accel.sh@19 -- # IFS=: 00:06:36.375 16:17:45 -- accel/accel.sh@19 -- # read -r var val 00:06:37.312 16:17:46 -- accel/accel.sh@20 -- # val= 00:06:37.312 16:17:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.312 16:17:46 -- accel/accel.sh@19 -- # IFS=: 00:06:37.312 16:17:46 -- accel/accel.sh@19 -- # read -r var val 00:06:37.312 16:17:46 -- accel/accel.sh@20 -- # val= 00:06:37.312 16:17:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.312 16:17:46 -- accel/accel.sh@19 -- # IFS=: 00:06:37.312 16:17:46 -- accel/accel.sh@19 -- # read -r var val 00:06:37.312 16:17:46 -- accel/accel.sh@20 -- # val= 00:06:37.312 16:17:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.312 16:17:46 -- accel/accel.sh@19 -- # IFS=: 00:06:37.312 16:17:46 -- accel/accel.sh@19 -- # read -r var val 00:06:37.312 16:17:46 -- accel/accel.sh@20 -- # val= 00:06:37.312 16:17:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.312 16:17:46 -- accel/accel.sh@19 -- # IFS=: 00:06:37.312 16:17:46 -- accel/accel.sh@19 -- # read -r var val 00:06:37.312 16:17:46 -- accel/accel.sh@20 -- # val= 00:06:37.312 16:17:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.312 16:17:46 -- accel/accel.sh@19 -- # IFS=: 00:06:37.312 16:17:46 -- 
accel/accel.sh@19 -- # read -r var val 00:06:37.312 16:17:46 -- accel/accel.sh@20 -- # val= 00:06:37.312 16:17:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.312 16:17:46 -- accel/accel.sh@19 -- # IFS=: 00:06:37.312 16:17:46 -- accel/accel.sh@19 -- # read -r var val 00:06:37.312 16:17:46 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:37.312 16:17:46 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:37.312 16:17:46 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.312 00:06:37.312 real 0m1.377s 00:06:37.312 user 0m1.251s 00:06:37.312 sys 0m0.130s 00:06:37.312 16:17:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:37.312 16:17:46 -- common/autotest_common.sh@10 -- # set +x 00:06:37.312 ************************************ 00:06:37.312 END TEST accel_crc32c_C2 00:06:37.312 ************************************ 00:06:37.570 16:17:46 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:37.570 16:17:46 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:37.570 16:17:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:37.570 16:17:46 -- common/autotest_common.sh@10 -- # set +x 00:06:37.570 ************************************ 00:06:37.570 START TEST accel_copy 00:06:37.570 ************************************ 00:06:37.570 16:17:46 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:06:37.570 16:17:46 -- accel/accel.sh@16 -- # local accel_opc 00:06:37.570 16:17:46 -- accel/accel.sh@17 -- # local accel_module 00:06:37.570 16:17:46 -- accel/accel.sh@19 -- # IFS=: 00:06:37.571 16:17:46 -- accel/accel.sh@19 -- # read -r var val 00:06:37.571 16:17:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:37.571 16:17:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:37.571 16:17:46 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.571 16:17:46 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.571 16:17:46 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.571 16:17:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.571 16:17:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.571 16:17:46 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.571 16:17:46 -- accel/accel.sh@40 -- # local IFS=, 00:06:37.571 16:17:46 -- accel/accel.sh@41 -- # jq -r . 00:06:37.571 [2024-04-26 16:17:46.550607] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
00:06:37.571 [2024-04-26 16:17:46.550679] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid343622 ] 00:06:37.571 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.830 [2024-04-26 16:17:46.625738] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.830 [2024-04-26 16:17:46.703747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.830 16:17:46 -- accel/accel.sh@20 -- # val= 00:06:37.830 16:17:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.830 16:17:46 -- accel/accel.sh@19 -- # IFS=: 00:06:37.830 16:17:46 -- accel/accel.sh@19 -- # read -r var val 00:06:37.830 16:17:46 -- accel/accel.sh@20 -- # val= 00:06:37.830 16:17:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.830 16:17:46 -- accel/accel.sh@19 -- # IFS=: 00:06:37.830 16:17:46 -- accel/accel.sh@19 -- # read -r var val 00:06:37.830 16:17:46 -- accel/accel.sh@20 -- # val=0x1 00:06:37.830 16:17:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.830 16:17:46 -- accel/accel.sh@19 -- # IFS=: 00:06:37.830 16:17:46 -- accel/accel.sh@19 -- # read -r var val 00:06:37.830 16:17:46 -- accel/accel.sh@20 -- # val= 00:06:37.830 16:17:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.830 16:17:46 -- accel/accel.sh@19 -- # IFS=: 00:06:37.830 16:17:46 -- accel/accel.sh@19 -- # read -r var val 00:06:37.830 16:17:46 -- accel/accel.sh@20 -- # val= 00:06:37.830 16:17:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.830 16:17:46 -- accel/accel.sh@19 -- # IFS=: 00:06:37.830 16:17:46 -- accel/accel.sh@19 -- # read -r var val 00:06:37.830 16:17:46 -- accel/accel.sh@20 -- # val=copy 00:06:37.830 16:17:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.830 16:17:46 -- accel/accel.sh@23 -- # accel_opc=copy 00:06:37.830 16:17:46 -- accel/accel.sh@19 -- # IFS=: 00:06:37.830 16:17:46 -- accel/accel.sh@19 -- # read -r var val 00:06:37.830 16:17:46 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:37.830 16:17:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.830 16:17:46 -- accel/accel.sh@19 -- # IFS=: 00:06:37.830 16:17:46 -- accel/accel.sh@19 -- # read -r var val 00:06:37.830 16:17:46 -- accel/accel.sh@20 -- # val= 00:06:37.830 16:17:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.830 16:17:46 -- accel/accel.sh@19 -- # IFS=: 00:06:37.830 16:17:46 -- accel/accel.sh@19 -- # read -r var val 00:06:37.830 16:17:46 -- accel/accel.sh@20 -- # val=software 00:06:37.830 16:17:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.830 16:17:46 -- accel/accel.sh@22 -- # accel_module=software 00:06:37.830 16:17:46 -- accel/accel.sh@19 -- # IFS=: 00:06:37.830 16:17:46 -- accel/accel.sh@19 -- # read -r var val 00:06:37.830 16:17:46 -- accel/accel.sh@20 -- # val=32 00:06:37.830 16:17:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.830 16:17:46 -- accel/accel.sh@19 -- # IFS=: 00:06:37.830 16:17:46 -- accel/accel.sh@19 -- # read -r var val 00:06:37.830 16:17:46 -- accel/accel.sh@20 -- # val=32 00:06:37.830 16:17:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.830 16:17:46 -- accel/accel.sh@19 -- # IFS=: 00:06:37.830 16:17:46 -- accel/accel.sh@19 -- # read -r var val 00:06:37.830 16:17:46 -- accel/accel.sh@20 -- # val=1 00:06:37.830 16:17:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.830 16:17:46 -- accel/accel.sh@19 -- # IFS=: 00:06:37.830 16:17:46 -- accel/accel.sh@19 -- # read -r var val 00:06:37.830 16:17:46 -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:37.830 16:17:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.830 16:17:46 -- accel/accel.sh@19 -- # IFS=: 00:06:37.830 16:17:46 -- accel/accel.sh@19 -- # read -r var val 00:06:37.830 16:17:46 -- accel/accel.sh@20 -- # val=Yes 00:06:37.830 16:17:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.830 16:17:46 -- accel/accel.sh@19 -- # IFS=: 00:06:37.830 16:17:46 -- accel/accel.sh@19 -- # read -r var val 00:06:37.830 16:17:46 -- accel/accel.sh@20 -- # val= 00:06:37.830 16:17:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.830 16:17:46 -- accel/accel.sh@19 -- # IFS=: 00:06:37.830 16:17:46 -- accel/accel.sh@19 -- # read -r var val 00:06:37.830 16:17:46 -- accel/accel.sh@20 -- # val= 00:06:37.830 16:17:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.830 16:17:46 -- accel/accel.sh@19 -- # IFS=: 00:06:37.830 16:17:46 -- accel/accel.sh@19 -- # read -r var val 00:06:39.210 16:17:47 -- accel/accel.sh@20 -- # val= 00:06:39.210 16:17:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.210 16:17:47 -- accel/accel.sh@19 -- # IFS=: 00:06:39.210 16:17:47 -- accel/accel.sh@19 -- # read -r var val 00:06:39.210 16:17:47 -- accel/accel.sh@20 -- # val= 00:06:39.210 16:17:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.210 16:17:47 -- accel/accel.sh@19 -- # IFS=: 00:06:39.210 16:17:47 -- accel/accel.sh@19 -- # read -r var val 00:06:39.210 16:17:47 -- accel/accel.sh@20 -- # val= 00:06:39.210 16:17:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.210 16:17:47 -- accel/accel.sh@19 -- # IFS=: 00:06:39.210 16:17:47 -- accel/accel.sh@19 -- # read -r var val 00:06:39.210 16:17:47 -- accel/accel.sh@20 -- # val= 00:06:39.210 16:17:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.210 16:17:47 -- accel/accel.sh@19 -- # IFS=: 00:06:39.210 16:17:47 -- accel/accel.sh@19 -- # read -r var val 00:06:39.210 16:17:47 -- accel/accel.sh@20 -- # val= 00:06:39.210 16:17:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.210 16:17:47 -- accel/accel.sh@19 -- # IFS=: 00:06:39.210 16:17:47 -- accel/accel.sh@19 -- # read -r var val 00:06:39.210 16:17:47 -- accel/accel.sh@20 -- # val= 00:06:39.210 16:17:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.210 16:17:47 -- accel/accel.sh@19 -- # IFS=: 00:06:39.210 16:17:47 -- accel/accel.sh@19 -- # read -r var val 00:06:39.210 16:17:47 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:39.210 16:17:47 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:39.210 16:17:47 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.210 00:06:39.210 real 0m1.398s 00:06:39.210 user 0m1.264s 00:06:39.210 sys 0m0.138s 00:06:39.210 16:17:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:39.210 16:17:47 -- common/autotest_common.sh@10 -- # set +x 00:06:39.210 ************************************ 00:06:39.210 END TEST accel_copy 00:06:39.210 ************************************ 00:06:39.210 16:17:47 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:39.210 16:17:47 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:39.210 16:17:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:39.210 16:17:47 -- common/autotest_common.sh@10 -- # set +x 00:06:39.210 ************************************ 00:06:39.210 START TEST accel_fill 00:06:39.210 ************************************ 00:06:39.211 16:17:48 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:39.211 16:17:48 -- accel/accel.sh@16 -- # local accel_opc 
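The fill case launched just above passes -f 128 -q 64 -a 64: fill byte 128, queue depth 64 and 64 tasks per core, matching the option help printed earlier. In the configuration trace below the same fill byte appears as val=0x80, i.e. 128 in hex; a one-liner confirms the conversion (purely illustrative):

  printf 'fill byte 128 = 0x%x\n' 128    # prints: fill byte 128 = 0x80
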
00:06:39.211 16:17:48 -- accel/accel.sh@17 -- # local accel_module 00:06:39.211 16:17:48 -- accel/accel.sh@19 -- # IFS=: 00:06:39.211 16:17:48 -- accel/accel.sh@19 -- # read -r var val 00:06:39.211 16:17:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:39.211 16:17:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:39.211 16:17:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:39.211 16:17:48 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.211 16:17:48 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.211 16:17:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.211 16:17:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.211 16:17:48 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.211 16:17:48 -- accel/accel.sh@40 -- # local IFS=, 00:06:39.211 16:17:48 -- accel/accel.sh@41 -- # jq -r . 00:06:39.211 [2024-04-26 16:17:48.143962] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:06:39.211 [2024-04-26 16:17:48.144035] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid343943 ] 00:06:39.211 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.211 [2024-04-26 16:17:48.216473] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.470 [2024-04-26 16:17:48.297510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.470 16:17:48 -- accel/accel.sh@20 -- # val= 00:06:39.470 16:17:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.470 16:17:48 -- accel/accel.sh@19 -- # IFS=: 00:06:39.470 16:17:48 -- accel/accel.sh@19 -- # read -r var val 00:06:39.470 16:17:48 -- accel/accel.sh@20 -- # val= 00:06:39.470 16:17:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.470 16:17:48 -- accel/accel.sh@19 -- # IFS=: 00:06:39.470 16:17:48 -- accel/accel.sh@19 -- # read -r var val 00:06:39.470 16:17:48 -- accel/accel.sh@20 -- # val=0x1 00:06:39.470 16:17:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.470 16:17:48 -- accel/accel.sh@19 -- # IFS=: 00:06:39.470 16:17:48 -- accel/accel.sh@19 -- # read -r var val 00:06:39.470 16:17:48 -- accel/accel.sh@20 -- # val= 00:06:39.470 16:17:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.470 16:17:48 -- accel/accel.sh@19 -- # IFS=: 00:06:39.470 16:17:48 -- accel/accel.sh@19 -- # read -r var val 00:06:39.470 16:17:48 -- accel/accel.sh@20 -- # val= 00:06:39.470 16:17:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.470 16:17:48 -- accel/accel.sh@19 -- # IFS=: 00:06:39.470 16:17:48 -- accel/accel.sh@19 -- # read -r var val 00:06:39.470 16:17:48 -- accel/accel.sh@20 -- # val=fill 00:06:39.470 16:17:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.470 16:17:48 -- accel/accel.sh@23 -- # accel_opc=fill 00:06:39.470 16:17:48 -- accel/accel.sh@19 -- # IFS=: 00:06:39.470 16:17:48 -- accel/accel.sh@19 -- # read -r var val 00:06:39.470 16:17:48 -- accel/accel.sh@20 -- # val=0x80 00:06:39.470 16:17:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.470 16:17:48 -- accel/accel.sh@19 -- # IFS=: 00:06:39.470 16:17:48 -- accel/accel.sh@19 -- # read -r var val 00:06:39.470 16:17:48 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:39.470 16:17:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.470 16:17:48 -- accel/accel.sh@19 -- # IFS=: 00:06:39.470 16:17:48 -- accel/accel.sh@19 -- # 
read -r var val 00:06:39.470 16:17:48 -- accel/accel.sh@20 -- # val= 00:06:39.470 16:17:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.470 16:17:48 -- accel/accel.sh@19 -- # IFS=: 00:06:39.470 16:17:48 -- accel/accel.sh@19 -- # read -r var val 00:06:39.470 16:17:48 -- accel/accel.sh@20 -- # val=software 00:06:39.470 16:17:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.470 16:17:48 -- accel/accel.sh@22 -- # accel_module=software 00:06:39.470 16:17:48 -- accel/accel.sh@19 -- # IFS=: 00:06:39.470 16:17:48 -- accel/accel.sh@19 -- # read -r var val 00:06:39.470 16:17:48 -- accel/accel.sh@20 -- # val=64 00:06:39.470 16:17:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.470 16:17:48 -- accel/accel.sh@19 -- # IFS=: 00:06:39.470 16:17:48 -- accel/accel.sh@19 -- # read -r var val 00:06:39.470 16:17:48 -- accel/accel.sh@20 -- # val=64 00:06:39.470 16:17:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.470 16:17:48 -- accel/accel.sh@19 -- # IFS=: 00:06:39.470 16:17:48 -- accel/accel.sh@19 -- # read -r var val 00:06:39.470 16:17:48 -- accel/accel.sh@20 -- # val=1 00:06:39.470 16:17:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.470 16:17:48 -- accel/accel.sh@19 -- # IFS=: 00:06:39.470 16:17:48 -- accel/accel.sh@19 -- # read -r var val 00:06:39.470 16:17:48 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:39.470 16:17:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.470 16:17:48 -- accel/accel.sh@19 -- # IFS=: 00:06:39.470 16:17:48 -- accel/accel.sh@19 -- # read -r var val 00:06:39.470 16:17:48 -- accel/accel.sh@20 -- # val=Yes 00:06:39.470 16:17:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.470 16:17:48 -- accel/accel.sh@19 -- # IFS=: 00:06:39.470 16:17:48 -- accel/accel.sh@19 -- # read -r var val 00:06:39.470 16:17:48 -- accel/accel.sh@20 -- # val= 00:06:39.470 16:17:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.470 16:17:48 -- accel/accel.sh@19 -- # IFS=: 00:06:39.470 16:17:48 -- accel/accel.sh@19 -- # read -r var val 00:06:39.470 16:17:48 -- accel/accel.sh@20 -- # val= 00:06:39.470 16:17:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.470 16:17:48 -- accel/accel.sh@19 -- # IFS=: 00:06:39.470 16:17:48 -- accel/accel.sh@19 -- # read -r var val 00:06:40.848 16:17:49 -- accel/accel.sh@20 -- # val= 00:06:40.848 16:17:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.848 16:17:49 -- accel/accel.sh@19 -- # IFS=: 00:06:40.848 16:17:49 -- accel/accel.sh@19 -- # read -r var val 00:06:40.848 16:17:49 -- accel/accel.sh@20 -- # val= 00:06:40.848 16:17:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.848 16:17:49 -- accel/accel.sh@19 -- # IFS=: 00:06:40.848 16:17:49 -- accel/accel.sh@19 -- # read -r var val 00:06:40.848 16:17:49 -- accel/accel.sh@20 -- # val= 00:06:40.848 16:17:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.848 16:17:49 -- accel/accel.sh@19 -- # IFS=: 00:06:40.848 16:17:49 -- accel/accel.sh@19 -- # read -r var val 00:06:40.848 16:17:49 -- accel/accel.sh@20 -- # val= 00:06:40.848 16:17:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.848 16:17:49 -- accel/accel.sh@19 -- # IFS=: 00:06:40.848 16:17:49 -- accel/accel.sh@19 -- # read -r var val 00:06:40.848 16:17:49 -- accel/accel.sh@20 -- # val= 00:06:40.848 16:17:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.848 16:17:49 -- accel/accel.sh@19 -- # IFS=: 00:06:40.848 16:17:49 -- accel/accel.sh@19 -- # read -r var val 00:06:40.848 16:17:49 -- accel/accel.sh@20 -- # val= 00:06:40.848 16:17:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.848 16:17:49 -- accel/accel.sh@19 -- # 
IFS=: 00:06:40.848 16:17:49 -- accel/accel.sh@19 -- # read -r var val 00:06:40.848 16:17:49 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:40.848 16:17:49 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:40.848 16:17:49 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.848 00:06:40.848 real 0m1.399s 00:06:40.848 user 0m1.256s 00:06:40.848 sys 0m0.147s 00:06:40.848 16:17:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:40.848 16:17:49 -- common/autotest_common.sh@10 -- # set +x 00:06:40.848 ************************************ 00:06:40.848 END TEST accel_fill 00:06:40.848 ************************************ 00:06:40.848 16:17:49 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:40.848 16:17:49 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:40.848 16:17:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.848 16:17:49 -- common/autotest_common.sh@10 -- # set +x 00:06:40.848 ************************************ 00:06:40.848 START TEST accel_copy_crc32c 00:06:40.848 ************************************ 00:06:40.848 16:17:49 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:06:40.848 16:17:49 -- accel/accel.sh@16 -- # local accel_opc 00:06:40.848 16:17:49 -- accel/accel.sh@17 -- # local accel_module 00:06:40.848 16:17:49 -- accel/accel.sh@19 -- # IFS=: 00:06:40.848 16:17:49 -- accel/accel.sh@19 -- # read -r var val 00:06:40.848 16:17:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:40.848 16:17:49 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:40.848 16:17:49 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.848 16:17:49 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.848 16:17:49 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.848 16:17:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.848 16:17:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.848 16:17:49 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.848 16:17:49 -- accel/accel.sh@40 -- # local IFS=, 00:06:40.848 16:17:49 -- accel/accel.sh@41 -- # jq -r . 00:06:40.848 [2024-04-26 16:17:49.719819] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
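For reference, the fill pass that just completed above comes down to a single accel_perf invocation. A minimal sketch for rerunning it by hand, assuming the same CI workspace build; dropping the -c /dev/fd/62 JSON config that accel.sh pipes in is an assumption that leaves accel_perf on its software module, which is the module this run reported anyway. The remaining flags are copied verbatim from the logged command line.

# Hedged sketch, not part of the harness: rerun the fill workload standalone.
# Omitting -c /dev/fd/62 (the harness-supplied accel config) assumes the
# software module is picked up by default.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf \
    -t 1 -w fill -f 128 -q 64 -a 64 -y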
00:06:40.848 [2024-04-26 16:17:49.719877] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid344195 ] 00:06:40.848 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.848 [2024-04-26 16:17:49.793289] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.848 [2024-04-26 16:17:49.870413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.108 16:17:49 -- accel/accel.sh@20 -- # val= 00:06:41.108 16:17:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.108 16:17:49 -- accel/accel.sh@19 -- # IFS=: 00:06:41.108 16:17:49 -- accel/accel.sh@19 -- # read -r var val 00:06:41.108 16:17:49 -- accel/accel.sh@20 -- # val= 00:06:41.108 16:17:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.108 16:17:49 -- accel/accel.sh@19 -- # IFS=: 00:06:41.108 16:17:49 -- accel/accel.sh@19 -- # read -r var val 00:06:41.108 16:17:49 -- accel/accel.sh@20 -- # val=0x1 00:06:41.108 16:17:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.108 16:17:49 -- accel/accel.sh@19 -- # IFS=: 00:06:41.108 16:17:49 -- accel/accel.sh@19 -- # read -r var val 00:06:41.108 16:17:49 -- accel/accel.sh@20 -- # val= 00:06:41.108 16:17:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.108 16:17:49 -- accel/accel.sh@19 -- # IFS=: 00:06:41.108 16:17:49 -- accel/accel.sh@19 -- # read -r var val 00:06:41.108 16:17:49 -- accel/accel.sh@20 -- # val= 00:06:41.108 16:17:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.108 16:17:49 -- accel/accel.sh@19 -- # IFS=: 00:06:41.108 16:17:49 -- accel/accel.sh@19 -- # read -r var val 00:06:41.108 16:17:49 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:41.108 16:17:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.108 16:17:49 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:41.108 16:17:49 -- accel/accel.sh@19 -- # IFS=: 00:06:41.108 16:17:49 -- accel/accel.sh@19 -- # read -r var val 00:06:41.108 16:17:49 -- accel/accel.sh@20 -- # val=0 00:06:41.108 16:17:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.108 16:17:49 -- accel/accel.sh@19 -- # IFS=: 00:06:41.108 16:17:49 -- accel/accel.sh@19 -- # read -r var val 00:06:41.108 16:17:49 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:41.108 16:17:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.108 16:17:49 -- accel/accel.sh@19 -- # IFS=: 00:06:41.108 16:17:49 -- accel/accel.sh@19 -- # read -r var val 00:06:41.108 16:17:49 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:41.108 16:17:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.108 16:17:49 -- accel/accel.sh@19 -- # IFS=: 00:06:41.108 16:17:49 -- accel/accel.sh@19 -- # read -r var val 00:06:41.108 16:17:49 -- accel/accel.sh@20 -- # val= 00:06:41.108 16:17:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.108 16:17:49 -- accel/accel.sh@19 -- # IFS=: 00:06:41.108 16:17:49 -- accel/accel.sh@19 -- # read -r var val 00:06:41.108 16:17:49 -- accel/accel.sh@20 -- # val=software 00:06:41.108 16:17:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.108 16:17:49 -- accel/accel.sh@22 -- # accel_module=software 00:06:41.108 16:17:49 -- accel/accel.sh@19 -- # IFS=: 00:06:41.108 16:17:49 -- accel/accel.sh@19 -- # read -r var val 00:06:41.108 16:17:49 -- accel/accel.sh@20 -- # val=32 00:06:41.108 16:17:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.108 16:17:49 -- accel/accel.sh@19 -- # IFS=: 00:06:41.108 16:17:49 -- accel/accel.sh@19 -- # read -r var val 
00:06:41.108 16:17:49 -- accel/accel.sh@20 -- # val=32 00:06:41.108 16:17:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.108 16:17:49 -- accel/accel.sh@19 -- # IFS=: 00:06:41.108 16:17:49 -- accel/accel.sh@19 -- # read -r var val 00:06:41.108 16:17:49 -- accel/accel.sh@20 -- # val=1 00:06:41.108 16:17:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.108 16:17:49 -- accel/accel.sh@19 -- # IFS=: 00:06:41.108 16:17:49 -- accel/accel.sh@19 -- # read -r var val 00:06:41.108 16:17:49 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:41.108 16:17:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.108 16:17:49 -- accel/accel.sh@19 -- # IFS=: 00:06:41.108 16:17:49 -- accel/accel.sh@19 -- # read -r var val 00:06:41.108 16:17:49 -- accel/accel.sh@20 -- # val=Yes 00:06:41.108 16:17:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.108 16:17:49 -- accel/accel.sh@19 -- # IFS=: 00:06:41.108 16:17:49 -- accel/accel.sh@19 -- # read -r var val 00:06:41.108 16:17:49 -- accel/accel.sh@20 -- # val= 00:06:41.108 16:17:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.108 16:17:49 -- accel/accel.sh@19 -- # IFS=: 00:06:41.108 16:17:49 -- accel/accel.sh@19 -- # read -r var val 00:06:41.108 16:17:49 -- accel/accel.sh@20 -- # val= 00:06:41.108 16:17:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.108 16:17:49 -- accel/accel.sh@19 -- # IFS=: 00:06:41.108 16:17:49 -- accel/accel.sh@19 -- # read -r var val 00:06:42.484 16:17:51 -- accel/accel.sh@20 -- # val= 00:06:42.484 16:17:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.484 16:17:51 -- accel/accel.sh@19 -- # IFS=: 00:06:42.484 16:17:51 -- accel/accel.sh@19 -- # read -r var val 00:06:42.484 16:17:51 -- accel/accel.sh@20 -- # val= 00:06:42.484 16:17:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.484 16:17:51 -- accel/accel.sh@19 -- # IFS=: 00:06:42.484 16:17:51 -- accel/accel.sh@19 -- # read -r var val 00:06:42.484 16:17:51 -- accel/accel.sh@20 -- # val= 00:06:42.484 16:17:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.484 16:17:51 -- accel/accel.sh@19 -- # IFS=: 00:06:42.484 16:17:51 -- accel/accel.sh@19 -- # read -r var val 00:06:42.484 16:17:51 -- accel/accel.sh@20 -- # val= 00:06:42.484 16:17:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.484 16:17:51 -- accel/accel.sh@19 -- # IFS=: 00:06:42.484 16:17:51 -- accel/accel.sh@19 -- # read -r var val 00:06:42.484 16:17:51 -- accel/accel.sh@20 -- # val= 00:06:42.484 16:17:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.484 16:17:51 -- accel/accel.sh@19 -- # IFS=: 00:06:42.484 16:17:51 -- accel/accel.sh@19 -- # read -r var val 00:06:42.484 16:17:51 -- accel/accel.sh@20 -- # val= 00:06:42.484 16:17:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.484 16:17:51 -- accel/accel.sh@19 -- # IFS=: 00:06:42.484 16:17:51 -- accel/accel.sh@19 -- # read -r var val 00:06:42.484 16:17:51 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:42.484 16:17:51 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:42.484 16:17:51 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.484 00:06:42.484 real 0m1.385s 00:06:42.484 user 0m1.252s 00:06:42.484 sys 0m0.137s 00:06:42.484 16:17:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:42.484 16:17:51 -- common/autotest_common.sh@10 -- # set +x 00:06:42.484 ************************************ 00:06:42.484 END TEST accel_copy_crc32c 00:06:42.484 ************************************ 00:06:42.484 16:17:51 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:42.484 
16:17:51 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:42.484 16:17:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:42.484 16:17:51 -- common/autotest_common.sh@10 -- # set +x 00:06:42.484 ************************************ 00:06:42.484 START TEST accel_copy_crc32c_C2 00:06:42.484 ************************************ 00:06:42.484 16:17:51 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:42.484 16:17:51 -- accel/accel.sh@16 -- # local accel_opc 00:06:42.484 16:17:51 -- accel/accel.sh@17 -- # local accel_module 00:06:42.484 16:17:51 -- accel/accel.sh@19 -- # IFS=: 00:06:42.484 16:17:51 -- accel/accel.sh@19 -- # read -r var val 00:06:42.484 16:17:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:42.484 16:17:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:42.484 16:17:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.484 16:17:51 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.484 16:17:51 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.484 16:17:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.484 16:17:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.484 16:17:51 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.484 16:17:51 -- accel/accel.sh@40 -- # local IFS=, 00:06:42.484 16:17:51 -- accel/accel.sh@41 -- # jq -r . 00:06:42.484 [2024-04-26 16:17:51.291810] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:06:42.484 [2024-04-26 16:17:51.291886] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid344407 ] 00:06:42.484 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.484 [2024-04-26 16:17:51.364683] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.484 [2024-04-26 16:17:51.445060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.484 16:17:51 -- accel/accel.sh@20 -- # val= 00:06:42.484 16:17:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.484 16:17:51 -- accel/accel.sh@19 -- # IFS=: 00:06:42.484 16:17:51 -- accel/accel.sh@19 -- # read -r var val 00:06:42.484 16:17:51 -- accel/accel.sh@20 -- # val= 00:06:42.484 16:17:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.484 16:17:51 -- accel/accel.sh@19 -- # IFS=: 00:06:42.484 16:17:51 -- accel/accel.sh@19 -- # read -r var val 00:06:42.484 16:17:51 -- accel/accel.sh@20 -- # val=0x1 00:06:42.484 16:17:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.484 16:17:51 -- accel/accel.sh@19 -- # IFS=: 00:06:42.484 16:17:51 -- accel/accel.sh@19 -- # read -r var val 00:06:42.484 16:17:51 -- accel/accel.sh@20 -- # val= 00:06:42.484 16:17:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.484 16:17:51 -- accel/accel.sh@19 -- # IFS=: 00:06:42.484 16:17:51 -- accel/accel.sh@19 -- # read -r var val 00:06:42.484 16:17:51 -- accel/accel.sh@20 -- # val= 00:06:42.484 16:17:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.484 16:17:51 -- accel/accel.sh@19 -- # IFS=: 00:06:42.484 16:17:51 -- accel/accel.sh@19 -- # read -r var val 00:06:42.484 16:17:51 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:42.484 16:17:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.484 16:17:51 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:42.484 16:17:51 -- accel/accel.sh@19 -- # IFS=: 00:06:42.484 
16:17:51 -- accel/accel.sh@19 -- # read -r var val 00:06:42.484 16:17:51 -- accel/accel.sh@20 -- # val=0 00:06:42.484 16:17:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.484 16:17:51 -- accel/accel.sh@19 -- # IFS=: 00:06:42.484 16:17:51 -- accel/accel.sh@19 -- # read -r var val 00:06:42.484 16:17:51 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.484 16:17:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.484 16:17:51 -- accel/accel.sh@19 -- # IFS=: 00:06:42.484 16:17:51 -- accel/accel.sh@19 -- # read -r var val 00:06:42.484 16:17:51 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:42.484 16:17:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.484 16:17:51 -- accel/accel.sh@19 -- # IFS=: 00:06:42.484 16:17:51 -- accel/accel.sh@19 -- # read -r var val 00:06:42.484 16:17:51 -- accel/accel.sh@20 -- # val= 00:06:42.484 16:17:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.484 16:17:51 -- accel/accel.sh@19 -- # IFS=: 00:06:42.484 16:17:51 -- accel/accel.sh@19 -- # read -r var val 00:06:42.484 16:17:51 -- accel/accel.sh@20 -- # val=software 00:06:42.484 16:17:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.484 16:17:51 -- accel/accel.sh@22 -- # accel_module=software 00:06:42.484 16:17:51 -- accel/accel.sh@19 -- # IFS=: 00:06:42.484 16:17:51 -- accel/accel.sh@19 -- # read -r var val 00:06:42.484 16:17:51 -- accel/accel.sh@20 -- # val=32 00:06:42.484 16:17:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.484 16:17:51 -- accel/accel.sh@19 -- # IFS=: 00:06:42.484 16:17:51 -- accel/accel.sh@19 -- # read -r var val 00:06:42.484 16:17:51 -- accel/accel.sh@20 -- # val=32 00:06:42.484 16:17:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.484 16:17:51 -- accel/accel.sh@19 -- # IFS=: 00:06:42.484 16:17:51 -- accel/accel.sh@19 -- # read -r var val 00:06:42.484 16:17:51 -- accel/accel.sh@20 -- # val=1 00:06:42.484 16:17:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.484 16:17:51 -- accel/accel.sh@19 -- # IFS=: 00:06:42.484 16:17:51 -- accel/accel.sh@19 -- # read -r var val 00:06:42.484 16:17:51 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:42.484 16:17:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.484 16:17:51 -- accel/accel.sh@19 -- # IFS=: 00:06:42.484 16:17:51 -- accel/accel.sh@19 -- # read -r var val 00:06:42.484 16:17:51 -- accel/accel.sh@20 -- # val=Yes 00:06:42.484 16:17:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.484 16:17:51 -- accel/accel.sh@19 -- # IFS=: 00:06:42.484 16:17:51 -- accel/accel.sh@19 -- # read -r var val 00:06:42.484 16:17:51 -- accel/accel.sh@20 -- # val= 00:06:42.484 16:17:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.484 16:17:51 -- accel/accel.sh@19 -- # IFS=: 00:06:42.484 16:17:51 -- accel/accel.sh@19 -- # read -r var val 00:06:42.484 16:17:51 -- accel/accel.sh@20 -- # val= 00:06:42.484 16:17:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.484 16:17:51 -- accel/accel.sh@19 -- # IFS=: 00:06:42.484 16:17:51 -- accel/accel.sh@19 -- # read -r var val 00:06:43.860 16:17:52 -- accel/accel.sh@20 -- # val= 00:06:43.860 16:17:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.860 16:17:52 -- accel/accel.sh@19 -- # IFS=: 00:06:43.860 16:17:52 -- accel/accel.sh@19 -- # read -r var val 00:06:43.860 16:17:52 -- accel/accel.sh@20 -- # val= 00:06:43.860 16:17:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.860 16:17:52 -- accel/accel.sh@19 -- # IFS=: 00:06:43.860 16:17:52 -- accel/accel.sh@19 -- # read -r var val 00:06:43.860 16:17:52 -- accel/accel.sh@20 -- # val= 00:06:43.860 16:17:52 -- accel/accel.sh@21 -- # case 
"$var" in 00:06:43.860 16:17:52 -- accel/accel.sh@19 -- # IFS=: 00:06:43.860 16:17:52 -- accel/accel.sh@19 -- # read -r var val 00:06:43.860 16:17:52 -- accel/accel.sh@20 -- # val= 00:06:43.860 16:17:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.860 16:17:52 -- accel/accel.sh@19 -- # IFS=: 00:06:43.860 16:17:52 -- accel/accel.sh@19 -- # read -r var val 00:06:43.860 16:17:52 -- accel/accel.sh@20 -- # val= 00:06:43.860 16:17:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.860 16:17:52 -- accel/accel.sh@19 -- # IFS=: 00:06:43.860 16:17:52 -- accel/accel.sh@19 -- # read -r var val 00:06:43.860 16:17:52 -- accel/accel.sh@20 -- # val= 00:06:43.860 16:17:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.860 16:17:52 -- accel/accel.sh@19 -- # IFS=: 00:06:43.860 16:17:52 -- accel/accel.sh@19 -- # read -r var val 00:06:43.860 16:17:52 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:43.860 16:17:52 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:43.860 16:17:52 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.860 00:06:43.860 real 0m1.394s 00:06:43.860 user 0m1.251s 00:06:43.860 sys 0m0.148s 00:06:43.860 16:17:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:43.860 16:17:52 -- common/autotest_common.sh@10 -- # set +x 00:06:43.860 ************************************ 00:06:43.860 END TEST accel_copy_crc32c_C2 00:06:43.860 ************************************ 00:06:43.860 16:17:52 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:43.860 16:17:52 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:43.860 16:17:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:43.860 16:17:52 -- common/autotest_common.sh@10 -- # set +x 00:06:43.860 ************************************ 00:06:43.860 START TEST accel_dualcast 00:06:43.860 ************************************ 00:06:43.860 16:17:52 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:06:43.860 16:17:52 -- accel/accel.sh@16 -- # local accel_opc 00:06:43.860 16:17:52 -- accel/accel.sh@17 -- # local accel_module 00:06:43.860 16:17:52 -- accel/accel.sh@19 -- # IFS=: 00:06:43.860 16:17:52 -- accel/accel.sh@19 -- # read -r var val 00:06:43.860 16:17:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:43.860 16:17:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.860 16:17:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:43.860 16:17:52 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.860 16:17:52 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.860 16:17:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.860 16:17:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.860 16:17:52 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.860 16:17:52 -- accel/accel.sh@40 -- # local IFS=, 00:06:43.860 16:17:52 -- accel/accel.sh@41 -- # jq -r . 00:06:43.860 [2024-04-26 16:17:52.860028] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
00:06:43.860 [2024-04-26 16:17:52.860091] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid344616 ] 00:06:44.120 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.120 [2024-04-26 16:17:52.934893] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.120 [2024-04-26 16:17:53.013945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.120 16:17:53 -- accel/accel.sh@20 -- # val= 00:06:44.120 16:17:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.120 16:17:53 -- accel/accel.sh@19 -- # IFS=: 00:06:44.120 16:17:53 -- accel/accel.sh@19 -- # read -r var val 00:06:44.120 16:17:53 -- accel/accel.sh@20 -- # val= 00:06:44.120 16:17:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.120 16:17:53 -- accel/accel.sh@19 -- # IFS=: 00:06:44.120 16:17:53 -- accel/accel.sh@19 -- # read -r var val 00:06:44.120 16:17:53 -- accel/accel.sh@20 -- # val=0x1 00:06:44.120 16:17:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.120 16:17:53 -- accel/accel.sh@19 -- # IFS=: 00:06:44.120 16:17:53 -- accel/accel.sh@19 -- # read -r var val 00:06:44.120 16:17:53 -- accel/accel.sh@20 -- # val= 00:06:44.120 16:17:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.120 16:17:53 -- accel/accel.sh@19 -- # IFS=: 00:06:44.120 16:17:53 -- accel/accel.sh@19 -- # read -r var val 00:06:44.120 16:17:53 -- accel/accel.sh@20 -- # val= 00:06:44.120 16:17:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.120 16:17:53 -- accel/accel.sh@19 -- # IFS=: 00:06:44.120 16:17:53 -- accel/accel.sh@19 -- # read -r var val 00:06:44.120 16:17:53 -- accel/accel.sh@20 -- # val=dualcast 00:06:44.120 16:17:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.120 16:17:53 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:44.120 16:17:53 -- accel/accel.sh@19 -- # IFS=: 00:06:44.120 16:17:53 -- accel/accel.sh@19 -- # read -r var val 00:06:44.120 16:17:53 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:44.120 16:17:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.120 16:17:53 -- accel/accel.sh@19 -- # IFS=: 00:06:44.120 16:17:53 -- accel/accel.sh@19 -- # read -r var val 00:06:44.120 16:17:53 -- accel/accel.sh@20 -- # val= 00:06:44.120 16:17:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.120 16:17:53 -- accel/accel.sh@19 -- # IFS=: 00:06:44.120 16:17:53 -- accel/accel.sh@19 -- # read -r var val 00:06:44.120 16:17:53 -- accel/accel.sh@20 -- # val=software 00:06:44.120 16:17:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.120 16:17:53 -- accel/accel.sh@22 -- # accel_module=software 00:06:44.120 16:17:53 -- accel/accel.sh@19 -- # IFS=: 00:06:44.120 16:17:53 -- accel/accel.sh@19 -- # read -r var val 00:06:44.120 16:17:53 -- accel/accel.sh@20 -- # val=32 00:06:44.120 16:17:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.120 16:17:53 -- accel/accel.sh@19 -- # IFS=: 00:06:44.120 16:17:53 -- accel/accel.sh@19 -- # read -r var val 00:06:44.120 16:17:53 -- accel/accel.sh@20 -- # val=32 00:06:44.120 16:17:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.120 16:17:53 -- accel/accel.sh@19 -- # IFS=: 00:06:44.120 16:17:53 -- accel/accel.sh@19 -- # read -r var val 00:06:44.120 16:17:53 -- accel/accel.sh@20 -- # val=1 00:06:44.120 16:17:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.120 16:17:53 -- accel/accel.sh@19 -- # IFS=: 00:06:44.120 16:17:53 -- accel/accel.sh@19 -- # read -r var val 00:06:44.120 16:17:53 -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:44.120 16:17:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.120 16:17:53 -- accel/accel.sh@19 -- # IFS=: 00:06:44.120 16:17:53 -- accel/accel.sh@19 -- # read -r var val 00:06:44.120 16:17:53 -- accel/accel.sh@20 -- # val=Yes 00:06:44.120 16:17:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.120 16:17:53 -- accel/accel.sh@19 -- # IFS=: 00:06:44.120 16:17:53 -- accel/accel.sh@19 -- # read -r var val 00:06:44.120 16:17:53 -- accel/accel.sh@20 -- # val= 00:06:44.120 16:17:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.120 16:17:53 -- accel/accel.sh@19 -- # IFS=: 00:06:44.120 16:17:53 -- accel/accel.sh@19 -- # read -r var val 00:06:44.120 16:17:53 -- accel/accel.sh@20 -- # val= 00:06:44.120 16:17:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.120 16:17:53 -- accel/accel.sh@19 -- # IFS=: 00:06:44.120 16:17:53 -- accel/accel.sh@19 -- # read -r var val 00:06:45.497 16:17:54 -- accel/accel.sh@20 -- # val= 00:06:45.497 16:17:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.497 16:17:54 -- accel/accel.sh@19 -- # IFS=: 00:06:45.497 16:17:54 -- accel/accel.sh@19 -- # read -r var val 00:06:45.497 16:17:54 -- accel/accel.sh@20 -- # val= 00:06:45.497 16:17:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.497 16:17:54 -- accel/accel.sh@19 -- # IFS=: 00:06:45.497 16:17:54 -- accel/accel.sh@19 -- # read -r var val 00:06:45.497 16:17:54 -- accel/accel.sh@20 -- # val= 00:06:45.497 16:17:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.497 16:17:54 -- accel/accel.sh@19 -- # IFS=: 00:06:45.497 16:17:54 -- accel/accel.sh@19 -- # read -r var val 00:06:45.497 16:17:54 -- accel/accel.sh@20 -- # val= 00:06:45.497 16:17:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.497 16:17:54 -- accel/accel.sh@19 -- # IFS=: 00:06:45.497 16:17:54 -- accel/accel.sh@19 -- # read -r var val 00:06:45.497 16:17:54 -- accel/accel.sh@20 -- # val= 00:06:45.497 16:17:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.497 16:17:54 -- accel/accel.sh@19 -- # IFS=: 00:06:45.497 16:17:54 -- accel/accel.sh@19 -- # read -r var val 00:06:45.497 16:17:54 -- accel/accel.sh@20 -- # val= 00:06:45.497 16:17:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.497 16:17:54 -- accel/accel.sh@19 -- # IFS=: 00:06:45.497 16:17:54 -- accel/accel.sh@19 -- # read -r var val 00:06:45.497 16:17:54 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:45.497 16:17:54 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:45.497 16:17:54 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.497 00:06:45.497 real 0m1.396s 00:06:45.497 user 0m1.258s 00:06:45.497 sys 0m0.143s 00:06:45.498 16:17:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:45.498 16:17:54 -- common/autotest_common.sh@10 -- # set +x 00:06:45.498 ************************************ 00:06:45.498 END TEST accel_dualcast 00:06:45.498 ************************************ 00:06:45.498 16:17:54 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:45.498 16:17:54 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:45.498 16:17:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:45.498 16:17:54 -- common/autotest_common.sh@10 -- # set +x 00:06:45.498 ************************************ 00:06:45.498 START TEST accel_compare 00:06:45.498 ************************************ 00:06:45.498 16:17:54 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:06:45.498 16:17:54 -- accel/accel.sh@16 -- # local accel_opc 00:06:45.498 16:17:54 -- 
accel/accel.sh@17 -- # local accel_module 00:06:45.498 16:17:54 -- accel/accel.sh@19 -- # IFS=: 00:06:45.498 16:17:54 -- accel/accel.sh@19 -- # read -r var val 00:06:45.498 16:17:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:45.498 16:17:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:45.498 16:17:54 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.498 16:17:54 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.498 16:17:54 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:45.498 16:17:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.498 16:17:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.498 16:17:54 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.498 16:17:54 -- accel/accel.sh@40 -- # local IFS=, 00:06:45.498 16:17:54 -- accel/accel.sh@41 -- # jq -r . 00:06:45.498 [2024-04-26 16:17:54.442714] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:06:45.498 [2024-04-26 16:17:54.442774] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid344824 ] 00:06:45.498 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.498 [2024-04-26 16:17:54.516781] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.757 [2024-04-26 16:17:54.600307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.757 16:17:54 -- accel/accel.sh@20 -- # val= 00:06:45.757 16:17:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.757 16:17:54 -- accel/accel.sh@19 -- # IFS=: 00:06:45.757 16:17:54 -- accel/accel.sh@19 -- # read -r var val 00:06:45.757 16:17:54 -- accel/accel.sh@20 -- # val= 00:06:45.757 16:17:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.757 16:17:54 -- accel/accel.sh@19 -- # IFS=: 00:06:45.757 16:17:54 -- accel/accel.sh@19 -- # read -r var val 00:06:45.757 16:17:54 -- accel/accel.sh@20 -- # val=0x1 00:06:45.757 16:17:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.757 16:17:54 -- accel/accel.sh@19 -- # IFS=: 00:06:45.757 16:17:54 -- accel/accel.sh@19 -- # read -r var val 00:06:45.757 16:17:54 -- accel/accel.sh@20 -- # val= 00:06:45.757 16:17:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.757 16:17:54 -- accel/accel.sh@19 -- # IFS=: 00:06:45.757 16:17:54 -- accel/accel.sh@19 -- # read -r var val 00:06:45.757 16:17:54 -- accel/accel.sh@20 -- # val= 00:06:45.757 16:17:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.757 16:17:54 -- accel/accel.sh@19 -- # IFS=: 00:06:45.757 16:17:54 -- accel/accel.sh@19 -- # read -r var val 00:06:45.757 16:17:54 -- accel/accel.sh@20 -- # val=compare 00:06:45.757 16:17:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.757 16:17:54 -- accel/accel.sh@23 -- # accel_opc=compare 00:06:45.757 16:17:54 -- accel/accel.sh@19 -- # IFS=: 00:06:45.757 16:17:54 -- accel/accel.sh@19 -- # read -r var val 00:06:45.757 16:17:54 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:45.757 16:17:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.757 16:17:54 -- accel/accel.sh@19 -- # IFS=: 00:06:45.757 16:17:54 -- accel/accel.sh@19 -- # read -r var val 00:06:45.757 16:17:54 -- accel/accel.sh@20 -- # val= 00:06:45.757 16:17:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.757 16:17:54 -- accel/accel.sh@19 -- # IFS=: 00:06:45.757 16:17:54 -- accel/accel.sh@19 -- # read -r var val 00:06:45.757 16:17:54 -- 
accel/accel.sh@20 -- # val=software 00:06:45.757 16:17:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.757 16:17:54 -- accel/accel.sh@22 -- # accel_module=software 00:06:45.757 16:17:54 -- accel/accel.sh@19 -- # IFS=: 00:06:45.757 16:17:54 -- accel/accel.sh@19 -- # read -r var val 00:06:45.757 16:17:54 -- accel/accel.sh@20 -- # val=32 00:06:45.757 16:17:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.757 16:17:54 -- accel/accel.sh@19 -- # IFS=: 00:06:45.757 16:17:54 -- accel/accel.sh@19 -- # read -r var val 00:06:45.757 16:17:54 -- accel/accel.sh@20 -- # val=32 00:06:45.757 16:17:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.757 16:17:54 -- accel/accel.sh@19 -- # IFS=: 00:06:45.757 16:17:54 -- accel/accel.sh@19 -- # read -r var val 00:06:45.757 16:17:54 -- accel/accel.sh@20 -- # val=1 00:06:45.757 16:17:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.757 16:17:54 -- accel/accel.sh@19 -- # IFS=: 00:06:45.757 16:17:54 -- accel/accel.sh@19 -- # read -r var val 00:06:45.757 16:17:54 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:45.757 16:17:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.757 16:17:54 -- accel/accel.sh@19 -- # IFS=: 00:06:45.757 16:17:54 -- accel/accel.sh@19 -- # read -r var val 00:06:45.757 16:17:54 -- accel/accel.sh@20 -- # val=Yes 00:06:45.757 16:17:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.757 16:17:54 -- accel/accel.sh@19 -- # IFS=: 00:06:45.757 16:17:54 -- accel/accel.sh@19 -- # read -r var val 00:06:45.757 16:17:54 -- accel/accel.sh@20 -- # val= 00:06:45.757 16:17:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.757 16:17:54 -- accel/accel.sh@19 -- # IFS=: 00:06:45.757 16:17:54 -- accel/accel.sh@19 -- # read -r var val 00:06:45.757 16:17:54 -- accel/accel.sh@20 -- # val= 00:06:45.757 16:17:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.757 16:17:54 -- accel/accel.sh@19 -- # IFS=: 00:06:45.757 16:17:54 -- accel/accel.sh@19 -- # read -r var val 00:06:47.136 16:17:55 -- accel/accel.sh@20 -- # val= 00:06:47.136 16:17:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.136 16:17:55 -- accel/accel.sh@19 -- # IFS=: 00:06:47.136 16:17:55 -- accel/accel.sh@19 -- # read -r var val 00:06:47.136 16:17:55 -- accel/accel.sh@20 -- # val= 00:06:47.136 16:17:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.136 16:17:55 -- accel/accel.sh@19 -- # IFS=: 00:06:47.136 16:17:55 -- accel/accel.sh@19 -- # read -r var val 00:06:47.136 16:17:55 -- accel/accel.sh@20 -- # val= 00:06:47.136 16:17:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.136 16:17:55 -- accel/accel.sh@19 -- # IFS=: 00:06:47.136 16:17:55 -- accel/accel.sh@19 -- # read -r var val 00:06:47.136 16:17:55 -- accel/accel.sh@20 -- # val= 00:06:47.136 16:17:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.136 16:17:55 -- accel/accel.sh@19 -- # IFS=: 00:06:47.136 16:17:55 -- accel/accel.sh@19 -- # read -r var val 00:06:47.136 16:17:55 -- accel/accel.sh@20 -- # val= 00:06:47.136 16:17:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.136 16:17:55 -- accel/accel.sh@19 -- # IFS=: 00:06:47.136 16:17:55 -- accel/accel.sh@19 -- # read -r var val 00:06:47.136 16:17:55 -- accel/accel.sh@20 -- # val= 00:06:47.136 16:17:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.136 16:17:55 -- accel/accel.sh@19 -- # IFS=: 00:06:47.136 16:17:55 -- accel/accel.sh@19 -- # read -r var val 00:06:47.136 16:17:55 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:47.136 16:17:55 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:47.136 16:17:55 -- accel/accel.sh@27 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:06:47.136 00:06:47.136 real 0m1.399s 00:06:47.136 user 0m1.255s 00:06:47.136 sys 0m0.148s 00:06:47.136 16:17:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:47.136 16:17:55 -- common/autotest_common.sh@10 -- # set +x 00:06:47.136 ************************************ 00:06:47.136 END TEST accel_compare 00:06:47.136 ************************************ 00:06:47.136 16:17:55 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:47.136 16:17:55 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:47.136 16:17:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:47.136 16:17:55 -- common/autotest_common.sh@10 -- # set +x 00:06:47.136 ************************************ 00:06:47.136 START TEST accel_xor 00:06:47.136 ************************************ 00:06:47.136 16:17:55 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:06:47.136 16:17:55 -- accel/accel.sh@16 -- # local accel_opc 00:06:47.136 16:17:55 -- accel/accel.sh@17 -- # local accel_module 00:06:47.136 16:17:55 -- accel/accel.sh@19 -- # IFS=: 00:06:47.136 16:17:56 -- accel/accel.sh@19 -- # read -r var val 00:06:47.136 16:17:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:47.136 16:17:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:47.136 16:17:56 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.136 16:17:56 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.136 16:17:56 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.136 16:17:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.136 16:17:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.136 16:17:56 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.136 16:17:56 -- accel/accel.sh@40 -- # local IFS=, 00:06:47.136 16:17:56 -- accel/accel.sh@41 -- # jq -r . 00:06:47.136 [2024-04-26 16:17:56.025710] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
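The dualcast and compare passes above, and the xor pass starting here, reuse the same accel_perf skeleton with only the -w workload swapped; the -x 3 variant further down appears to raise the xor source count from the default 2 to 3, judging by the val=2 and val=3 lines in the trace. A hedged standalone loop over the same three workloads:

# Hedged sketch: one binary, three workloads, flags as logged above.
BIN=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf
for op in dualcast compare xor; do
    "$BIN" -t 1 -w "$op" -y
done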
00:06:47.136 [2024-04-26 16:17:56.025767] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid345039 ] 00:06:47.136 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.136 [2024-04-26 16:17:56.096766] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.396 [2024-04-26 16:17:56.178089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.396 16:17:56 -- accel/accel.sh@20 -- # val= 00:06:47.396 16:17:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.396 16:17:56 -- accel/accel.sh@19 -- # IFS=: 00:06:47.396 16:17:56 -- accel/accel.sh@19 -- # read -r var val 00:06:47.396 16:17:56 -- accel/accel.sh@20 -- # val= 00:06:47.396 16:17:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.396 16:17:56 -- accel/accel.sh@19 -- # IFS=: 00:06:47.396 16:17:56 -- accel/accel.sh@19 -- # read -r var val 00:06:47.396 16:17:56 -- accel/accel.sh@20 -- # val=0x1 00:06:47.396 16:17:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.396 16:17:56 -- accel/accel.sh@19 -- # IFS=: 00:06:47.396 16:17:56 -- accel/accel.sh@19 -- # read -r var val 00:06:47.396 16:17:56 -- accel/accel.sh@20 -- # val= 00:06:47.396 16:17:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.396 16:17:56 -- accel/accel.sh@19 -- # IFS=: 00:06:47.396 16:17:56 -- accel/accel.sh@19 -- # read -r var val 00:06:47.396 16:17:56 -- accel/accel.sh@20 -- # val= 00:06:47.396 16:17:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.396 16:17:56 -- accel/accel.sh@19 -- # IFS=: 00:06:47.396 16:17:56 -- accel/accel.sh@19 -- # read -r var val 00:06:47.396 16:17:56 -- accel/accel.sh@20 -- # val=xor 00:06:47.396 16:17:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.396 16:17:56 -- accel/accel.sh@23 -- # accel_opc=xor 00:06:47.396 16:17:56 -- accel/accel.sh@19 -- # IFS=: 00:06:47.396 16:17:56 -- accel/accel.sh@19 -- # read -r var val 00:06:47.396 16:17:56 -- accel/accel.sh@20 -- # val=2 00:06:47.396 16:17:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.396 16:17:56 -- accel/accel.sh@19 -- # IFS=: 00:06:47.396 16:17:56 -- accel/accel.sh@19 -- # read -r var val 00:06:47.396 16:17:56 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:47.396 16:17:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.396 16:17:56 -- accel/accel.sh@19 -- # IFS=: 00:06:47.396 16:17:56 -- accel/accel.sh@19 -- # read -r var val 00:06:47.396 16:17:56 -- accel/accel.sh@20 -- # val= 00:06:47.396 16:17:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.396 16:17:56 -- accel/accel.sh@19 -- # IFS=: 00:06:47.396 16:17:56 -- accel/accel.sh@19 -- # read -r var val 00:06:47.396 16:17:56 -- accel/accel.sh@20 -- # val=software 00:06:47.396 16:17:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.396 16:17:56 -- accel/accel.sh@22 -- # accel_module=software 00:06:47.396 16:17:56 -- accel/accel.sh@19 -- # IFS=: 00:06:47.396 16:17:56 -- accel/accel.sh@19 -- # read -r var val 00:06:47.396 16:17:56 -- accel/accel.sh@20 -- # val=32 00:06:47.396 16:17:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.396 16:17:56 -- accel/accel.sh@19 -- # IFS=: 00:06:47.396 16:17:56 -- accel/accel.sh@19 -- # read -r var val 00:06:47.396 16:17:56 -- accel/accel.sh@20 -- # val=32 00:06:47.396 16:17:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.396 16:17:56 -- accel/accel.sh@19 -- # IFS=: 00:06:47.396 16:17:56 -- accel/accel.sh@19 -- # read -r var val 00:06:47.396 16:17:56 -- 
accel/accel.sh@20 -- # val=1 00:06:47.396 16:17:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.396 16:17:56 -- accel/accel.sh@19 -- # IFS=: 00:06:47.396 16:17:56 -- accel/accel.sh@19 -- # read -r var val 00:06:47.396 16:17:56 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:47.396 16:17:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.396 16:17:56 -- accel/accel.sh@19 -- # IFS=: 00:06:47.396 16:17:56 -- accel/accel.sh@19 -- # read -r var val 00:06:47.396 16:17:56 -- accel/accel.sh@20 -- # val=Yes 00:06:47.396 16:17:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.396 16:17:56 -- accel/accel.sh@19 -- # IFS=: 00:06:47.396 16:17:56 -- accel/accel.sh@19 -- # read -r var val 00:06:47.396 16:17:56 -- accel/accel.sh@20 -- # val= 00:06:47.396 16:17:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.396 16:17:56 -- accel/accel.sh@19 -- # IFS=: 00:06:47.396 16:17:56 -- accel/accel.sh@19 -- # read -r var val 00:06:47.396 16:17:56 -- accel/accel.sh@20 -- # val= 00:06:47.396 16:17:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.396 16:17:56 -- accel/accel.sh@19 -- # IFS=: 00:06:47.396 16:17:56 -- accel/accel.sh@19 -- # read -r var val 00:06:48.775 16:17:57 -- accel/accel.sh@20 -- # val= 00:06:48.775 16:17:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.775 16:17:57 -- accel/accel.sh@19 -- # IFS=: 00:06:48.775 16:17:57 -- accel/accel.sh@19 -- # read -r var val 00:06:48.775 16:17:57 -- accel/accel.sh@20 -- # val= 00:06:48.775 16:17:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.775 16:17:57 -- accel/accel.sh@19 -- # IFS=: 00:06:48.775 16:17:57 -- accel/accel.sh@19 -- # read -r var val 00:06:48.775 16:17:57 -- accel/accel.sh@20 -- # val= 00:06:48.775 16:17:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.775 16:17:57 -- accel/accel.sh@19 -- # IFS=: 00:06:48.775 16:17:57 -- accel/accel.sh@19 -- # read -r var val 00:06:48.775 16:17:57 -- accel/accel.sh@20 -- # val= 00:06:48.775 16:17:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.775 16:17:57 -- accel/accel.sh@19 -- # IFS=: 00:06:48.775 16:17:57 -- accel/accel.sh@19 -- # read -r var val 00:06:48.775 16:17:57 -- accel/accel.sh@20 -- # val= 00:06:48.775 16:17:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.775 16:17:57 -- accel/accel.sh@19 -- # IFS=: 00:06:48.775 16:17:57 -- accel/accel.sh@19 -- # read -r var val 00:06:48.775 16:17:57 -- accel/accel.sh@20 -- # val= 00:06:48.775 16:17:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.775 16:17:57 -- accel/accel.sh@19 -- # IFS=: 00:06:48.775 16:17:57 -- accel/accel.sh@19 -- # read -r var val 00:06:48.775 16:17:57 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:48.775 16:17:57 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:48.775 16:17:57 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.775 00:06:48.775 real 0m1.381s 00:06:48.775 user 0m1.240s 00:06:48.775 sys 0m0.146s 00:06:48.775 16:17:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:48.775 16:17:57 -- common/autotest_common.sh@10 -- # set +x 00:06:48.775 ************************************ 00:06:48.775 END TEST accel_xor 00:06:48.775 ************************************ 00:06:48.775 16:17:57 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:48.775 16:17:57 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:48.775 16:17:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:48.775 16:17:57 -- common/autotest_common.sh@10 -- # set +x 00:06:48.775 ************************************ 00:06:48.775 START TEST accel_xor 
00:06:48.775 ************************************ 00:06:48.775 16:17:57 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:06:48.775 16:17:57 -- accel/accel.sh@16 -- # local accel_opc 00:06:48.775 16:17:57 -- accel/accel.sh@17 -- # local accel_module 00:06:48.775 16:17:57 -- accel/accel.sh@19 -- # IFS=: 00:06:48.775 16:17:57 -- accel/accel.sh@19 -- # read -r var val 00:06:48.775 16:17:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:48.775 16:17:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:48.775 16:17:57 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.775 16:17:57 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.775 16:17:57 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.775 16:17:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.775 16:17:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.775 16:17:57 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.775 16:17:57 -- accel/accel.sh@40 -- # local IFS=, 00:06:48.775 16:17:57 -- accel/accel.sh@41 -- # jq -r . 00:06:48.775 [2024-04-26 16:17:57.591294] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:06:48.775 [2024-04-26 16:17:57.591372] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid345274 ] 00:06:48.775 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.775 [2024-04-26 16:17:57.665163] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.775 [2024-04-26 16:17:57.745374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.775 16:17:57 -- accel/accel.sh@20 -- # val= 00:06:48.775 16:17:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.775 16:17:57 -- accel/accel.sh@19 -- # IFS=: 00:06:48.775 16:17:57 -- accel/accel.sh@19 -- # read -r var val 00:06:48.775 16:17:57 -- accel/accel.sh@20 -- # val= 00:06:48.776 16:17:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.776 16:17:57 -- accel/accel.sh@19 -- # IFS=: 00:06:48.776 16:17:57 -- accel/accel.sh@19 -- # read -r var val 00:06:48.776 16:17:57 -- accel/accel.sh@20 -- # val=0x1 00:06:48.776 16:17:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.776 16:17:57 -- accel/accel.sh@19 -- # IFS=: 00:06:48.776 16:17:57 -- accel/accel.sh@19 -- # read -r var val 00:06:48.776 16:17:57 -- accel/accel.sh@20 -- # val= 00:06:48.776 16:17:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.776 16:17:57 -- accel/accel.sh@19 -- # IFS=: 00:06:48.776 16:17:57 -- accel/accel.sh@19 -- # read -r var val 00:06:48.776 16:17:57 -- accel/accel.sh@20 -- # val= 00:06:48.776 16:17:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.776 16:17:57 -- accel/accel.sh@19 -- # IFS=: 00:06:48.776 16:17:57 -- accel/accel.sh@19 -- # read -r var val 00:06:48.776 16:17:57 -- accel/accel.sh@20 -- # val=xor 00:06:48.776 16:17:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.776 16:17:57 -- accel/accel.sh@23 -- # accel_opc=xor 00:06:48.776 16:17:57 -- accel/accel.sh@19 -- # IFS=: 00:06:48.776 16:17:57 -- accel/accel.sh@19 -- # read -r var val 00:06:48.776 16:17:57 -- accel/accel.sh@20 -- # val=3 00:06:48.776 16:17:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.776 16:17:57 -- accel/accel.sh@19 -- # IFS=: 00:06:48.776 16:17:57 -- accel/accel.sh@19 -- # read -r var val 00:06:48.776 16:17:57 -- accel/accel.sh@20 -- # val='4096 
bytes' 00:06:48.776 16:17:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.776 16:17:57 -- accel/accel.sh@19 -- # IFS=: 00:06:48.776 16:17:57 -- accel/accel.sh@19 -- # read -r var val 00:06:48.776 16:17:57 -- accel/accel.sh@20 -- # val= 00:06:48.776 16:17:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.776 16:17:57 -- accel/accel.sh@19 -- # IFS=: 00:06:48.776 16:17:57 -- accel/accel.sh@19 -- # read -r var val 00:06:48.776 16:17:57 -- accel/accel.sh@20 -- # val=software 00:06:48.776 16:17:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.776 16:17:57 -- accel/accel.sh@22 -- # accel_module=software 00:06:48.776 16:17:57 -- accel/accel.sh@19 -- # IFS=: 00:06:48.776 16:17:57 -- accel/accel.sh@19 -- # read -r var val 00:06:48.776 16:17:57 -- accel/accel.sh@20 -- # val=32 00:06:48.776 16:17:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.776 16:17:57 -- accel/accel.sh@19 -- # IFS=: 00:06:48.776 16:17:57 -- accel/accel.sh@19 -- # read -r var val 00:06:48.776 16:17:57 -- accel/accel.sh@20 -- # val=32 00:06:48.776 16:17:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.776 16:17:57 -- accel/accel.sh@19 -- # IFS=: 00:06:48.776 16:17:57 -- accel/accel.sh@19 -- # read -r var val 00:06:48.776 16:17:57 -- accel/accel.sh@20 -- # val=1 00:06:48.776 16:17:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.776 16:17:57 -- accel/accel.sh@19 -- # IFS=: 00:06:48.776 16:17:57 -- accel/accel.sh@19 -- # read -r var val 00:06:48.776 16:17:57 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:48.776 16:17:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.776 16:17:57 -- accel/accel.sh@19 -- # IFS=: 00:06:48.776 16:17:57 -- accel/accel.sh@19 -- # read -r var val 00:06:48.776 16:17:57 -- accel/accel.sh@20 -- # val=Yes 00:06:49.035 16:17:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.035 16:17:57 -- accel/accel.sh@19 -- # IFS=: 00:06:49.035 16:17:57 -- accel/accel.sh@19 -- # read -r var val 00:06:49.035 16:17:57 -- accel/accel.sh@20 -- # val= 00:06:49.035 16:17:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.035 16:17:57 -- accel/accel.sh@19 -- # IFS=: 00:06:49.035 16:17:57 -- accel/accel.sh@19 -- # read -r var val 00:06:49.035 16:17:57 -- accel/accel.sh@20 -- # val= 00:06:49.035 16:17:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.035 16:17:57 -- accel/accel.sh@19 -- # IFS=: 00:06:49.035 16:17:57 -- accel/accel.sh@19 -- # read -r var val 00:06:49.973 16:17:58 -- accel/accel.sh@20 -- # val= 00:06:49.973 16:17:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.973 16:17:58 -- accel/accel.sh@19 -- # IFS=: 00:06:49.973 16:17:58 -- accel/accel.sh@19 -- # read -r var val 00:06:49.973 16:17:58 -- accel/accel.sh@20 -- # val= 00:06:49.973 16:17:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.973 16:17:58 -- accel/accel.sh@19 -- # IFS=: 00:06:49.973 16:17:58 -- accel/accel.sh@19 -- # read -r var val 00:06:49.973 16:17:58 -- accel/accel.sh@20 -- # val= 00:06:49.973 16:17:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.973 16:17:58 -- accel/accel.sh@19 -- # IFS=: 00:06:49.973 16:17:58 -- accel/accel.sh@19 -- # read -r var val 00:06:49.973 16:17:58 -- accel/accel.sh@20 -- # val= 00:06:49.973 16:17:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.973 16:17:58 -- accel/accel.sh@19 -- # IFS=: 00:06:49.973 16:17:58 -- accel/accel.sh@19 -- # read -r var val 00:06:49.973 16:17:58 -- accel/accel.sh@20 -- # val= 00:06:49.973 16:17:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.973 16:17:58 -- accel/accel.sh@19 -- # IFS=: 00:06:49.973 16:17:58 -- accel/accel.sh@19 -- # read -r 
var val 00:06:49.973 16:17:58 -- accel/accel.sh@20 -- # val= 00:06:49.973 16:17:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.973 16:17:58 -- accel/accel.sh@19 -- # IFS=: 00:06:49.973 16:17:58 -- accel/accel.sh@19 -- # read -r var val 00:06:49.973 16:17:58 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:49.973 16:17:58 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:49.973 16:17:58 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.973 00:06:49.973 real 0m1.398s 00:06:49.973 user 0m1.262s 00:06:49.973 sys 0m0.141s 00:06:49.973 16:17:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:49.973 16:17:58 -- common/autotest_common.sh@10 -- # set +x 00:06:49.973 ************************************ 00:06:49.973 END TEST accel_xor 00:06:49.973 ************************************ 00:06:49.973 16:17:58 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:49.973 16:17:58 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:49.973 16:17:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:49.973 16:17:58 -- common/autotest_common.sh@10 -- # set +x 00:06:50.232 ************************************ 00:06:50.232 START TEST accel_dif_verify 00:06:50.232 ************************************ 00:06:50.232 16:17:59 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:06:50.232 16:17:59 -- accel/accel.sh@16 -- # local accel_opc 00:06:50.232 16:17:59 -- accel/accel.sh@17 -- # local accel_module 00:06:50.232 16:17:59 -- accel/accel.sh@19 -- # IFS=: 00:06:50.232 16:17:59 -- accel/accel.sh@19 -- # read -r var val 00:06:50.232 16:17:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:50.232 16:17:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:50.232 16:17:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.232 16:17:59 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.232 16:17:59 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.232 16:17:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.232 16:17:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.232 16:17:59 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.232 16:17:59 -- accel/accel.sh@40 -- # local IFS=, 00:06:50.232 16:17:59 -- accel/accel.sh@41 -- # jq -r . 00:06:50.232 [2024-04-26 16:17:59.155774] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
00:06:50.233 [2024-04-26 16:17:59.155832] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid345569 ] 00:06:50.233 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.233 [2024-04-26 16:17:59.229505] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.492 [2024-04-26 16:17:59.311525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.492 16:17:59 -- accel/accel.sh@20 -- # val= 00:06:50.492 16:17:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.492 16:17:59 -- accel/accel.sh@19 -- # IFS=: 00:06:50.492 16:17:59 -- accel/accel.sh@19 -- # read -r var val 00:06:50.492 16:17:59 -- accel/accel.sh@20 -- # val= 00:06:50.492 16:17:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.492 16:17:59 -- accel/accel.sh@19 -- # IFS=: 00:06:50.492 16:17:59 -- accel/accel.sh@19 -- # read -r var val 00:06:50.492 16:17:59 -- accel/accel.sh@20 -- # val=0x1 00:06:50.492 16:17:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.492 16:17:59 -- accel/accel.sh@19 -- # IFS=: 00:06:50.492 16:17:59 -- accel/accel.sh@19 -- # read -r var val 00:06:50.492 16:17:59 -- accel/accel.sh@20 -- # val= 00:06:50.492 16:17:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.492 16:17:59 -- accel/accel.sh@19 -- # IFS=: 00:06:50.492 16:17:59 -- accel/accel.sh@19 -- # read -r var val 00:06:50.492 16:17:59 -- accel/accel.sh@20 -- # val= 00:06:50.492 16:17:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.492 16:17:59 -- accel/accel.sh@19 -- # IFS=: 00:06:50.492 16:17:59 -- accel/accel.sh@19 -- # read -r var val 00:06:50.492 16:17:59 -- accel/accel.sh@20 -- # val=dif_verify 00:06:50.492 16:17:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.492 16:17:59 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:50.492 16:17:59 -- accel/accel.sh@19 -- # IFS=: 00:06:50.492 16:17:59 -- accel/accel.sh@19 -- # read -r var val 00:06:50.492 16:17:59 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:50.492 16:17:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.492 16:17:59 -- accel/accel.sh@19 -- # IFS=: 00:06:50.492 16:17:59 -- accel/accel.sh@19 -- # read -r var val 00:06:50.492 16:17:59 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:50.492 16:17:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.492 16:17:59 -- accel/accel.sh@19 -- # IFS=: 00:06:50.492 16:17:59 -- accel/accel.sh@19 -- # read -r var val 00:06:50.492 16:17:59 -- accel/accel.sh@20 -- # val='512 bytes' 00:06:50.492 16:17:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.492 16:17:59 -- accel/accel.sh@19 -- # IFS=: 00:06:50.492 16:17:59 -- accel/accel.sh@19 -- # read -r var val 00:06:50.492 16:17:59 -- accel/accel.sh@20 -- # val='8 bytes' 00:06:50.492 16:17:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.492 16:17:59 -- accel/accel.sh@19 -- # IFS=: 00:06:50.492 16:17:59 -- accel/accel.sh@19 -- # read -r var val 00:06:50.492 16:17:59 -- accel/accel.sh@20 -- # val= 00:06:50.492 16:17:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.492 16:17:59 -- accel/accel.sh@19 -- # IFS=: 00:06:50.492 16:17:59 -- accel/accel.sh@19 -- # read -r var val 00:06:50.492 16:17:59 -- accel/accel.sh@20 -- # val=software 00:06:50.492 16:17:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.493 16:17:59 -- accel/accel.sh@22 -- # accel_module=software 00:06:50.493 16:17:59 -- accel/accel.sh@19 -- # IFS=: 00:06:50.493 16:17:59 -- accel/accel.sh@19 -- # read -r 
var val 00:06:50.493 16:17:59 -- accel/accel.sh@20 -- # val=32 00:06:50.493 16:17:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.493 16:17:59 -- accel/accel.sh@19 -- # IFS=: 00:06:50.493 16:17:59 -- accel/accel.sh@19 -- # read -r var val 00:06:50.493 16:17:59 -- accel/accel.sh@20 -- # val=32 00:06:50.493 16:17:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.493 16:17:59 -- accel/accel.sh@19 -- # IFS=: 00:06:50.493 16:17:59 -- accel/accel.sh@19 -- # read -r var val 00:06:50.493 16:17:59 -- accel/accel.sh@20 -- # val=1 00:06:50.493 16:17:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.493 16:17:59 -- accel/accel.sh@19 -- # IFS=: 00:06:50.493 16:17:59 -- accel/accel.sh@19 -- # read -r var val 00:06:50.493 16:17:59 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:50.493 16:17:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.493 16:17:59 -- accel/accel.sh@19 -- # IFS=: 00:06:50.493 16:17:59 -- accel/accel.sh@19 -- # read -r var val 00:06:50.493 16:17:59 -- accel/accel.sh@20 -- # val=No 00:06:50.493 16:17:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.493 16:17:59 -- accel/accel.sh@19 -- # IFS=: 00:06:50.493 16:17:59 -- accel/accel.sh@19 -- # read -r var val 00:06:50.493 16:17:59 -- accel/accel.sh@20 -- # val= 00:06:50.493 16:17:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.493 16:17:59 -- accel/accel.sh@19 -- # IFS=: 00:06:50.493 16:17:59 -- accel/accel.sh@19 -- # read -r var val 00:06:50.493 16:17:59 -- accel/accel.sh@20 -- # val= 00:06:50.493 16:17:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:50.493 16:17:59 -- accel/accel.sh@19 -- # IFS=: 00:06:50.493 16:17:59 -- accel/accel.sh@19 -- # read -r var val 00:06:51.872 16:18:00 -- accel/accel.sh@20 -- # val= 00:06:51.872 16:18:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.872 16:18:00 -- accel/accel.sh@19 -- # IFS=: 00:06:51.872 16:18:00 -- accel/accel.sh@19 -- # read -r var val 00:06:51.872 16:18:00 -- accel/accel.sh@20 -- # val= 00:06:51.872 16:18:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.872 16:18:00 -- accel/accel.sh@19 -- # IFS=: 00:06:51.872 16:18:00 -- accel/accel.sh@19 -- # read -r var val 00:06:51.872 16:18:00 -- accel/accel.sh@20 -- # val= 00:06:51.872 16:18:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.872 16:18:00 -- accel/accel.sh@19 -- # IFS=: 00:06:51.872 16:18:00 -- accel/accel.sh@19 -- # read -r var val 00:06:51.872 16:18:00 -- accel/accel.sh@20 -- # val= 00:06:51.872 16:18:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.872 16:18:00 -- accel/accel.sh@19 -- # IFS=: 00:06:51.872 16:18:00 -- accel/accel.sh@19 -- # read -r var val 00:06:51.872 16:18:00 -- accel/accel.sh@20 -- # val= 00:06:51.872 16:18:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.872 16:18:00 -- accel/accel.sh@19 -- # IFS=: 00:06:51.872 16:18:00 -- accel/accel.sh@19 -- # read -r var val 00:06:51.872 16:18:00 -- accel/accel.sh@20 -- # val= 00:06:51.872 16:18:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.872 16:18:00 -- accel/accel.sh@19 -- # IFS=: 00:06:51.872 16:18:00 -- accel/accel.sh@19 -- # read -r var val 00:06:51.872 16:18:00 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:51.872 16:18:00 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:51.872 16:18:00 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.872 00:06:51.872 real 0m1.400s 00:06:51.872 user 0m1.266s 00:06:51.872 sys 0m0.139s 00:06:51.872 16:18:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:51.872 16:18:00 -- common/autotest_common.sh@10 -- # set +x 00:06:51.872 
************************************ 00:06:51.872 END TEST accel_dif_verify 00:06:51.872 ************************************ 00:06:51.872 16:18:00 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:51.872 16:18:00 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:51.872 16:18:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:51.872 16:18:00 -- common/autotest_common.sh@10 -- # set +x 00:06:51.872 ************************************ 00:06:51.872 START TEST accel_dif_generate 00:06:51.872 ************************************ 00:06:51.872 16:18:00 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:06:51.872 16:18:00 -- accel/accel.sh@16 -- # local accel_opc 00:06:51.872 16:18:00 -- accel/accel.sh@17 -- # local accel_module 00:06:51.872 16:18:00 -- accel/accel.sh@19 -- # IFS=: 00:06:51.872 16:18:00 -- accel/accel.sh@19 -- # read -r var val 00:06:51.872 16:18:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:51.872 16:18:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:51.872 16:18:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.872 16:18:00 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.872 16:18:00 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.872 16:18:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.872 16:18:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.872 16:18:00 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.872 16:18:00 -- accel/accel.sh@40 -- # local IFS=, 00:06:51.872 16:18:00 -- accel/accel.sh@41 -- # jq -r . 00:06:51.872 [2024-04-26 16:18:00.767362] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
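A minimal sketch of launching the same dif_generate workload by hand, mirroring the accel_perf invocation recorded just above (assumptions: the same SPDK build tree, hugepages already configured, and the harness-supplied '-c /dev/fd/62' accel JSON config omitted):

    # hedged sketch, mirrors the logged command rather than adding new options
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_generate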
00:06:51.872 [2024-04-26 16:18:00.767426] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid345883 ] 00:06:51.872 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.872 [2024-04-26 16:18:00.840745] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.131 [2024-04-26 16:18:00.922897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.131 16:18:00 -- accel/accel.sh@20 -- # val= 00:06:52.131 16:18:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.131 16:18:00 -- accel/accel.sh@19 -- # IFS=: 00:06:52.131 16:18:00 -- accel/accel.sh@19 -- # read -r var val 00:06:52.131 16:18:00 -- accel/accel.sh@20 -- # val= 00:06:52.131 16:18:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.131 16:18:00 -- accel/accel.sh@19 -- # IFS=: 00:06:52.131 16:18:00 -- accel/accel.sh@19 -- # read -r var val 00:06:52.131 16:18:00 -- accel/accel.sh@20 -- # val=0x1 00:06:52.131 16:18:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.131 16:18:00 -- accel/accel.sh@19 -- # IFS=: 00:06:52.131 16:18:00 -- accel/accel.sh@19 -- # read -r var val 00:06:52.131 16:18:00 -- accel/accel.sh@20 -- # val= 00:06:52.131 16:18:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.131 16:18:00 -- accel/accel.sh@19 -- # IFS=: 00:06:52.131 16:18:00 -- accel/accel.sh@19 -- # read -r var val 00:06:52.132 16:18:00 -- accel/accel.sh@20 -- # val= 00:06:52.132 16:18:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.132 16:18:00 -- accel/accel.sh@19 -- # IFS=: 00:06:52.132 16:18:00 -- accel/accel.sh@19 -- # read -r var val 00:06:52.132 16:18:00 -- accel/accel.sh@20 -- # val=dif_generate 00:06:52.132 16:18:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.132 16:18:00 -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:52.132 16:18:00 -- accel/accel.sh@19 -- # IFS=: 00:06:52.132 16:18:00 -- accel/accel.sh@19 -- # read -r var val 00:06:52.132 16:18:00 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:52.132 16:18:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.132 16:18:00 -- accel/accel.sh@19 -- # IFS=: 00:06:52.132 16:18:00 -- accel/accel.sh@19 -- # read -r var val 00:06:52.132 16:18:00 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:52.132 16:18:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.132 16:18:00 -- accel/accel.sh@19 -- # IFS=: 00:06:52.132 16:18:00 -- accel/accel.sh@19 -- # read -r var val 00:06:52.132 16:18:00 -- accel/accel.sh@20 -- # val='512 bytes' 00:06:52.132 16:18:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.132 16:18:00 -- accel/accel.sh@19 -- # IFS=: 00:06:52.132 16:18:00 -- accel/accel.sh@19 -- # read -r var val 00:06:52.132 16:18:00 -- accel/accel.sh@20 -- # val='8 bytes' 00:06:52.132 16:18:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.132 16:18:00 -- accel/accel.sh@19 -- # IFS=: 00:06:52.132 16:18:00 -- accel/accel.sh@19 -- # read -r var val 00:06:52.132 16:18:00 -- accel/accel.sh@20 -- # val= 00:06:52.132 16:18:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.132 16:18:00 -- accel/accel.sh@19 -- # IFS=: 00:06:52.132 16:18:00 -- accel/accel.sh@19 -- # read -r var val 00:06:52.132 16:18:00 -- accel/accel.sh@20 -- # val=software 00:06:52.132 16:18:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.132 16:18:00 -- accel/accel.sh@22 -- # accel_module=software 00:06:52.132 16:18:00 -- accel/accel.sh@19 -- # IFS=: 00:06:52.132 16:18:00 -- accel/accel.sh@19 -- # read 
-r var val 00:06:52.132 16:18:00 -- accel/accel.sh@20 -- # val=32 00:06:52.132 16:18:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.132 16:18:00 -- accel/accel.sh@19 -- # IFS=: 00:06:52.132 16:18:00 -- accel/accel.sh@19 -- # read -r var val 00:06:52.132 16:18:00 -- accel/accel.sh@20 -- # val=32 00:06:52.132 16:18:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.132 16:18:00 -- accel/accel.sh@19 -- # IFS=: 00:06:52.132 16:18:00 -- accel/accel.sh@19 -- # read -r var val 00:06:52.132 16:18:00 -- accel/accel.sh@20 -- # val=1 00:06:52.132 16:18:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.132 16:18:00 -- accel/accel.sh@19 -- # IFS=: 00:06:52.132 16:18:00 -- accel/accel.sh@19 -- # read -r var val 00:06:52.132 16:18:00 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:52.132 16:18:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.132 16:18:00 -- accel/accel.sh@19 -- # IFS=: 00:06:52.132 16:18:00 -- accel/accel.sh@19 -- # read -r var val 00:06:52.132 16:18:00 -- accel/accel.sh@20 -- # val=No 00:06:52.132 16:18:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.132 16:18:00 -- accel/accel.sh@19 -- # IFS=: 00:06:52.132 16:18:00 -- accel/accel.sh@19 -- # read -r var val 00:06:52.132 16:18:00 -- accel/accel.sh@20 -- # val= 00:06:52.132 16:18:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.132 16:18:00 -- accel/accel.sh@19 -- # IFS=: 00:06:52.132 16:18:00 -- accel/accel.sh@19 -- # read -r var val 00:06:52.132 16:18:00 -- accel/accel.sh@20 -- # val= 00:06:52.132 16:18:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.132 16:18:00 -- accel/accel.sh@19 -- # IFS=: 00:06:52.132 16:18:00 -- accel/accel.sh@19 -- # read -r var val 00:06:53.511 16:18:02 -- accel/accel.sh@20 -- # val= 00:06:53.511 16:18:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.511 16:18:02 -- accel/accel.sh@19 -- # IFS=: 00:06:53.511 16:18:02 -- accel/accel.sh@19 -- # read -r var val 00:06:53.511 16:18:02 -- accel/accel.sh@20 -- # val= 00:06:53.511 16:18:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.511 16:18:02 -- accel/accel.sh@19 -- # IFS=: 00:06:53.511 16:18:02 -- accel/accel.sh@19 -- # read -r var val 00:06:53.511 16:18:02 -- accel/accel.sh@20 -- # val= 00:06:53.511 16:18:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.511 16:18:02 -- accel/accel.sh@19 -- # IFS=: 00:06:53.511 16:18:02 -- accel/accel.sh@19 -- # read -r var val 00:06:53.511 16:18:02 -- accel/accel.sh@20 -- # val= 00:06:53.511 16:18:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.511 16:18:02 -- accel/accel.sh@19 -- # IFS=: 00:06:53.511 16:18:02 -- accel/accel.sh@19 -- # read -r var val 00:06:53.511 16:18:02 -- accel/accel.sh@20 -- # val= 00:06:53.511 16:18:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.511 16:18:02 -- accel/accel.sh@19 -- # IFS=: 00:06:53.511 16:18:02 -- accel/accel.sh@19 -- # read -r var val 00:06:53.511 16:18:02 -- accel/accel.sh@20 -- # val= 00:06:53.511 16:18:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.511 16:18:02 -- accel/accel.sh@19 -- # IFS=: 00:06:53.511 16:18:02 -- accel/accel.sh@19 -- # read -r var val 00:06:53.511 16:18:02 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:53.511 16:18:02 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:53.511 16:18:02 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.511 00:06:53.511 real 0m1.386s 00:06:53.511 user 0m1.248s 00:06:53.511 sys 0m0.140s 00:06:53.511 16:18:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:53.511 16:18:02 -- common/autotest_common.sh@10 -- # set +x 00:06:53.511 
************************************ 00:06:53.511 END TEST accel_dif_generate 00:06:53.511 ************************************ 00:06:53.511 16:18:02 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:53.511 16:18:02 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:53.511 16:18:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:53.511 16:18:02 -- common/autotest_common.sh@10 -- # set +x 00:06:53.511 ************************************ 00:06:53.511 START TEST accel_dif_generate_copy 00:06:53.511 ************************************ 00:06:53.511 16:18:02 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:06:53.511 16:18:02 -- accel/accel.sh@16 -- # local accel_opc 00:06:53.511 16:18:02 -- accel/accel.sh@17 -- # local accel_module 00:06:53.511 16:18:02 -- accel/accel.sh@19 -- # IFS=: 00:06:53.511 16:18:02 -- accel/accel.sh@19 -- # read -r var val 00:06:53.511 16:18:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:53.511 16:18:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:53.511 16:18:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.511 16:18:02 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.511 16:18:02 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.511 16:18:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.511 16:18:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.511 16:18:02 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.511 16:18:02 -- accel/accel.sh@40 -- # local IFS=, 00:06:53.511 16:18:02 -- accel/accel.sh@41 -- # jq -r . 00:06:53.511 [2024-04-26 16:18:02.301101] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
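The dif_generate_copy case only swaps the -w argument; a hedged manual equivalent of the invocation just logged (same assumptions as the dif_generate sketch above):

    # hedged sketch, mirrors the logged command
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy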
00:06:53.511 [2024-04-26 16:18:02.301142] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid346170 ] 00:06:53.511 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.511 [2024-04-26 16:18:02.371377] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.511 [2024-04-26 16:18:02.451107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.511 16:18:02 -- accel/accel.sh@20 -- # val= 00:06:53.511 16:18:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.511 16:18:02 -- accel/accel.sh@19 -- # IFS=: 00:06:53.511 16:18:02 -- accel/accel.sh@19 -- # read -r var val 00:06:53.511 16:18:02 -- accel/accel.sh@20 -- # val= 00:06:53.511 16:18:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.511 16:18:02 -- accel/accel.sh@19 -- # IFS=: 00:06:53.511 16:18:02 -- accel/accel.sh@19 -- # read -r var val 00:06:53.511 16:18:02 -- accel/accel.sh@20 -- # val=0x1 00:06:53.511 16:18:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.511 16:18:02 -- accel/accel.sh@19 -- # IFS=: 00:06:53.511 16:18:02 -- accel/accel.sh@19 -- # read -r var val 00:06:53.511 16:18:02 -- accel/accel.sh@20 -- # val= 00:06:53.511 16:18:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.511 16:18:02 -- accel/accel.sh@19 -- # IFS=: 00:06:53.511 16:18:02 -- accel/accel.sh@19 -- # read -r var val 00:06:53.511 16:18:02 -- accel/accel.sh@20 -- # val= 00:06:53.511 16:18:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.511 16:18:02 -- accel/accel.sh@19 -- # IFS=: 00:06:53.511 16:18:02 -- accel/accel.sh@19 -- # read -r var val 00:06:53.511 16:18:02 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:53.511 16:18:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.511 16:18:02 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:53.511 16:18:02 -- accel/accel.sh@19 -- # IFS=: 00:06:53.511 16:18:02 -- accel/accel.sh@19 -- # read -r var val 00:06:53.511 16:18:02 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:53.511 16:18:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.511 16:18:02 -- accel/accel.sh@19 -- # IFS=: 00:06:53.511 16:18:02 -- accel/accel.sh@19 -- # read -r var val 00:06:53.511 16:18:02 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:53.511 16:18:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.511 16:18:02 -- accel/accel.sh@19 -- # IFS=: 00:06:53.511 16:18:02 -- accel/accel.sh@19 -- # read -r var val 00:06:53.511 16:18:02 -- accel/accel.sh@20 -- # val= 00:06:53.511 16:18:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.511 16:18:02 -- accel/accel.sh@19 -- # IFS=: 00:06:53.511 16:18:02 -- accel/accel.sh@19 -- # read -r var val 00:06:53.511 16:18:02 -- accel/accel.sh@20 -- # val=software 00:06:53.511 16:18:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.511 16:18:02 -- accel/accel.sh@22 -- # accel_module=software 00:06:53.511 16:18:02 -- accel/accel.sh@19 -- # IFS=: 00:06:53.511 16:18:02 -- accel/accel.sh@19 -- # read -r var val 00:06:53.511 16:18:02 -- accel/accel.sh@20 -- # val=32 00:06:53.511 16:18:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.511 16:18:02 -- accel/accel.sh@19 -- # IFS=: 00:06:53.511 16:18:02 -- accel/accel.sh@19 -- # read -r var val 00:06:53.511 16:18:02 -- accel/accel.sh@20 -- # val=32 00:06:53.511 16:18:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.511 16:18:02 -- accel/accel.sh@19 -- # IFS=: 00:06:53.511 16:18:02 -- accel/accel.sh@19 -- # read -r var 
val 00:06:53.511 16:18:02 -- accel/accel.sh@20 -- # val=1 00:06:53.511 16:18:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.511 16:18:02 -- accel/accel.sh@19 -- # IFS=: 00:06:53.511 16:18:02 -- accel/accel.sh@19 -- # read -r var val 00:06:53.511 16:18:02 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:53.511 16:18:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.511 16:18:02 -- accel/accel.sh@19 -- # IFS=: 00:06:53.511 16:18:02 -- accel/accel.sh@19 -- # read -r var val 00:06:53.511 16:18:02 -- accel/accel.sh@20 -- # val=No 00:06:53.511 16:18:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.511 16:18:02 -- accel/accel.sh@19 -- # IFS=: 00:06:53.511 16:18:02 -- accel/accel.sh@19 -- # read -r var val 00:06:53.511 16:18:02 -- accel/accel.sh@20 -- # val= 00:06:53.511 16:18:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.511 16:18:02 -- accel/accel.sh@19 -- # IFS=: 00:06:53.511 16:18:02 -- accel/accel.sh@19 -- # read -r var val 00:06:53.511 16:18:02 -- accel/accel.sh@20 -- # val= 00:06:53.511 16:18:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:53.511 16:18:02 -- accel/accel.sh@19 -- # IFS=: 00:06:53.511 16:18:02 -- accel/accel.sh@19 -- # read -r var val 00:06:54.888 16:18:03 -- accel/accel.sh@20 -- # val= 00:06:54.888 16:18:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.888 16:18:03 -- accel/accel.sh@19 -- # IFS=: 00:06:54.888 16:18:03 -- accel/accel.sh@19 -- # read -r var val 00:06:54.888 16:18:03 -- accel/accel.sh@20 -- # val= 00:06:54.888 16:18:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.888 16:18:03 -- accel/accel.sh@19 -- # IFS=: 00:06:54.888 16:18:03 -- accel/accel.sh@19 -- # read -r var val 00:06:54.888 16:18:03 -- accel/accel.sh@20 -- # val= 00:06:54.888 16:18:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.888 16:18:03 -- accel/accel.sh@19 -- # IFS=: 00:06:54.888 16:18:03 -- accel/accel.sh@19 -- # read -r var val 00:06:54.888 16:18:03 -- accel/accel.sh@20 -- # val= 00:06:54.888 16:18:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.888 16:18:03 -- accel/accel.sh@19 -- # IFS=: 00:06:54.888 16:18:03 -- accel/accel.sh@19 -- # read -r var val 00:06:54.888 16:18:03 -- accel/accel.sh@20 -- # val= 00:06:54.888 16:18:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.888 16:18:03 -- accel/accel.sh@19 -- # IFS=: 00:06:54.888 16:18:03 -- accel/accel.sh@19 -- # read -r var val 00:06:54.888 16:18:03 -- accel/accel.sh@20 -- # val= 00:06:54.888 16:18:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:54.888 16:18:03 -- accel/accel.sh@19 -- # IFS=: 00:06:54.888 16:18:03 -- accel/accel.sh@19 -- # read -r var val 00:06:54.888 16:18:03 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:54.888 16:18:03 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:54.888 16:18:03 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.888 00:06:54.888 real 0m1.382s 00:06:54.888 user 0m1.250s 00:06:54.888 sys 0m0.134s 00:06:54.888 16:18:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:54.888 16:18:03 -- common/autotest_common.sh@10 -- # set +x 00:06:54.888 ************************************ 00:06:54.888 END TEST accel_dif_generate_copy 00:06:54.888 ************************************ 00:06:54.888 16:18:03 -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:54.888 16:18:03 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:54.888 16:18:03 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:54.888 16:18:03 -- common/autotest_common.sh@1093 
-- # xtrace_disable 00:06:54.888 16:18:03 -- common/autotest_common.sh@10 -- # set +x 00:06:54.888 ************************************ 00:06:54.888 START TEST accel_comp 00:06:54.888 ************************************ 00:06:54.888 16:18:03 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:54.888 16:18:03 -- accel/accel.sh@16 -- # local accel_opc 00:06:54.888 16:18:03 -- accel/accel.sh@17 -- # local accel_module 00:06:54.888 16:18:03 -- accel/accel.sh@19 -- # IFS=: 00:06:54.888 16:18:03 -- accel/accel.sh@19 -- # read -r var val 00:06:54.888 16:18:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:54.888 16:18:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:54.888 16:18:03 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.888 16:18:03 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:54.888 16:18:03 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:54.888 16:18:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.888 16:18:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.888 16:18:03 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:54.888 16:18:03 -- accel/accel.sh@40 -- # local IFS=, 00:06:54.888 16:18:03 -- accel/accel.sh@41 -- # jq -r . 00:06:54.888 [2024-04-26 16:18:03.905199] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:06:54.888 [2024-04-26 16:18:03.905260] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid346377 ] 00:06:55.148 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.148 [2024-04-26 16:18:03.981048] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.148 [2024-04-26 16:18:04.067182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.148 16:18:04 -- accel/accel.sh@20 -- # val= 00:06:55.148 16:18:04 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.148 16:18:04 -- accel/accel.sh@19 -- # IFS=: 00:06:55.148 16:18:04 -- accel/accel.sh@19 -- # read -r var val 00:06:55.148 16:18:04 -- accel/accel.sh@20 -- # val= 00:06:55.148 16:18:04 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.148 16:18:04 -- accel/accel.sh@19 -- # IFS=: 00:06:55.148 16:18:04 -- accel/accel.sh@19 -- # read -r var val 00:06:55.148 16:18:04 -- accel/accel.sh@20 -- # val= 00:06:55.148 16:18:04 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.148 16:18:04 -- accel/accel.sh@19 -- # IFS=: 00:06:55.148 16:18:04 -- accel/accel.sh@19 -- # read -r var val 00:06:55.148 16:18:04 -- accel/accel.sh@20 -- # val=0x1 00:06:55.148 16:18:04 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.148 16:18:04 -- accel/accel.sh@19 -- # IFS=: 00:06:55.148 16:18:04 -- accel/accel.sh@19 -- # read -r var val 00:06:55.148 16:18:04 -- accel/accel.sh@20 -- # val= 00:06:55.148 16:18:04 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.148 16:18:04 -- accel/accel.sh@19 -- # IFS=: 00:06:55.148 16:18:04 -- accel/accel.sh@19 -- # read -r var val 00:06:55.148 16:18:04 -- accel/accel.sh@20 -- # val= 00:06:55.148 16:18:04 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.148 16:18:04 -- accel/accel.sh@19 -- # IFS=: 00:06:55.148 16:18:04 -- accel/accel.sh@19 -- # read -r var val 
00:06:55.148 16:18:04 -- accel/accel.sh@20 -- # val=compress 00:06:55.148 16:18:04 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.148 16:18:04 -- accel/accel.sh@23 -- # accel_opc=compress 00:06:55.148 16:18:04 -- accel/accel.sh@19 -- # IFS=: 00:06:55.148 16:18:04 -- accel/accel.sh@19 -- # read -r var val 00:06:55.148 16:18:04 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:55.148 16:18:04 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.149 16:18:04 -- accel/accel.sh@19 -- # IFS=: 00:06:55.149 16:18:04 -- accel/accel.sh@19 -- # read -r var val 00:06:55.149 16:18:04 -- accel/accel.sh@20 -- # val= 00:06:55.149 16:18:04 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.149 16:18:04 -- accel/accel.sh@19 -- # IFS=: 00:06:55.149 16:18:04 -- accel/accel.sh@19 -- # read -r var val 00:06:55.149 16:18:04 -- accel/accel.sh@20 -- # val=software 00:06:55.149 16:18:04 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.149 16:18:04 -- accel/accel.sh@22 -- # accel_module=software 00:06:55.149 16:18:04 -- accel/accel.sh@19 -- # IFS=: 00:06:55.149 16:18:04 -- accel/accel.sh@19 -- # read -r var val 00:06:55.149 16:18:04 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:55.149 16:18:04 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.149 16:18:04 -- accel/accel.sh@19 -- # IFS=: 00:06:55.149 16:18:04 -- accel/accel.sh@19 -- # read -r var val 00:06:55.149 16:18:04 -- accel/accel.sh@20 -- # val=32 00:06:55.149 16:18:04 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.149 16:18:04 -- accel/accel.sh@19 -- # IFS=: 00:06:55.149 16:18:04 -- accel/accel.sh@19 -- # read -r var val 00:06:55.149 16:18:04 -- accel/accel.sh@20 -- # val=32 00:06:55.149 16:18:04 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.149 16:18:04 -- accel/accel.sh@19 -- # IFS=: 00:06:55.149 16:18:04 -- accel/accel.sh@19 -- # read -r var val 00:06:55.149 16:18:04 -- accel/accel.sh@20 -- # val=1 00:06:55.149 16:18:04 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.149 16:18:04 -- accel/accel.sh@19 -- # IFS=: 00:06:55.149 16:18:04 -- accel/accel.sh@19 -- # read -r var val 00:06:55.149 16:18:04 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:55.149 16:18:04 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.149 16:18:04 -- accel/accel.sh@19 -- # IFS=: 00:06:55.149 16:18:04 -- accel/accel.sh@19 -- # read -r var val 00:06:55.149 16:18:04 -- accel/accel.sh@20 -- # val=No 00:06:55.149 16:18:04 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.149 16:18:04 -- accel/accel.sh@19 -- # IFS=: 00:06:55.149 16:18:04 -- accel/accel.sh@19 -- # read -r var val 00:06:55.149 16:18:04 -- accel/accel.sh@20 -- # val= 00:06:55.149 16:18:04 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.149 16:18:04 -- accel/accel.sh@19 -- # IFS=: 00:06:55.149 16:18:04 -- accel/accel.sh@19 -- # read -r var val 00:06:55.149 16:18:04 -- accel/accel.sh@20 -- # val= 00:06:55.149 16:18:04 -- accel/accel.sh@21 -- # case "$var" in 00:06:55.149 16:18:04 -- accel/accel.sh@19 -- # IFS=: 00:06:55.149 16:18:04 -- accel/accel.sh@19 -- # read -r var val 00:06:56.526 16:18:05 -- accel/accel.sh@20 -- # val= 00:06:56.526 16:18:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.526 16:18:05 -- accel/accel.sh@19 -- # IFS=: 00:06:56.526 16:18:05 -- accel/accel.sh@19 -- # read -r var val 00:06:56.526 16:18:05 -- accel/accel.sh@20 -- # val= 00:06:56.526 16:18:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.526 16:18:05 -- accel/accel.sh@19 -- # IFS=: 00:06:56.526 16:18:05 -- accel/accel.sh@19 -- # read -r var val 00:06:56.526 16:18:05 -- 
accel/accel.sh@20 -- # val= 00:06:56.526 16:18:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.526 16:18:05 -- accel/accel.sh@19 -- # IFS=: 00:06:56.526 16:18:05 -- accel/accel.sh@19 -- # read -r var val 00:06:56.526 16:18:05 -- accel/accel.sh@20 -- # val= 00:06:56.526 16:18:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.526 16:18:05 -- accel/accel.sh@19 -- # IFS=: 00:06:56.527 16:18:05 -- accel/accel.sh@19 -- # read -r var val 00:06:56.527 16:18:05 -- accel/accel.sh@20 -- # val= 00:06:56.527 16:18:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.527 16:18:05 -- accel/accel.sh@19 -- # IFS=: 00:06:56.527 16:18:05 -- accel/accel.sh@19 -- # read -r var val 00:06:56.527 16:18:05 -- accel/accel.sh@20 -- # val= 00:06:56.527 16:18:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.527 16:18:05 -- accel/accel.sh@19 -- # IFS=: 00:06:56.527 16:18:05 -- accel/accel.sh@19 -- # read -r var val 00:06:56.527 16:18:05 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:56.527 16:18:05 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:56.527 16:18:05 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.527 00:06:56.527 real 0m1.407s 00:06:56.527 user 0m1.255s 00:06:56.527 sys 0m0.154s 00:06:56.527 16:18:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:56.527 16:18:05 -- common/autotest_common.sh@10 -- # set +x 00:06:56.527 ************************************ 00:06:56.527 END TEST accel_comp 00:06:56.527 ************************************ 00:06:56.527 16:18:05 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:56.527 16:18:05 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:56.527 16:18:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:56.527 16:18:05 -- common/autotest_common.sh@10 -- # set +x 00:06:56.527 ************************************ 00:06:56.527 START TEST accel_decomp 00:06:56.527 ************************************ 00:06:56.527 16:18:05 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:56.527 16:18:05 -- accel/accel.sh@16 -- # local accel_opc 00:06:56.527 16:18:05 -- accel/accel.sh@17 -- # local accel_module 00:06:56.527 16:18:05 -- accel/accel.sh@19 -- # IFS=: 00:06:56.527 16:18:05 -- accel/accel.sh@19 -- # read -r var val 00:06:56.527 16:18:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:56.527 16:18:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:56.527 16:18:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.527 16:18:05 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.527 16:18:05 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.527 16:18:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.527 16:18:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.527 16:18:05 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.527 16:18:05 -- accel/accel.sh@40 -- # local IFS=, 00:06:56.527 16:18:05 -- accel/accel.sh@41 -- # jq -r . 00:06:56.527 [2024-04-26 16:18:05.499529] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
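The compress and decompress tests additionally point accel_perf at the test input file via -l, as in the commands logged above; a hedged manual equivalent (assuming the same workspace paths, hugepages configured, and no -c JSON config):

    # hedged sketch, mirrors the logged compress/decompress commands
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y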
00:06:56.527 [2024-04-26 16:18:05.499586] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid346868 ] 00:06:56.527 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.786 [2024-04-26 16:18:05.574438] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.786 [2024-04-26 16:18:05.659504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.786 16:18:05 -- accel/accel.sh@20 -- # val= 00:06:56.786 16:18:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.786 16:18:05 -- accel/accel.sh@19 -- # IFS=: 00:06:56.786 16:18:05 -- accel/accel.sh@19 -- # read -r var val 00:06:56.786 16:18:05 -- accel/accel.sh@20 -- # val= 00:06:56.786 16:18:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.786 16:18:05 -- accel/accel.sh@19 -- # IFS=: 00:06:56.786 16:18:05 -- accel/accel.sh@19 -- # read -r var val 00:06:56.786 16:18:05 -- accel/accel.sh@20 -- # val= 00:06:56.786 16:18:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.786 16:18:05 -- accel/accel.sh@19 -- # IFS=: 00:06:56.786 16:18:05 -- accel/accel.sh@19 -- # read -r var val 00:06:56.786 16:18:05 -- accel/accel.sh@20 -- # val=0x1 00:06:56.786 16:18:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.786 16:18:05 -- accel/accel.sh@19 -- # IFS=: 00:06:56.786 16:18:05 -- accel/accel.sh@19 -- # read -r var val 00:06:56.786 16:18:05 -- accel/accel.sh@20 -- # val= 00:06:56.786 16:18:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.786 16:18:05 -- accel/accel.sh@19 -- # IFS=: 00:06:56.786 16:18:05 -- accel/accel.sh@19 -- # read -r var val 00:06:56.786 16:18:05 -- accel/accel.sh@20 -- # val= 00:06:56.786 16:18:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.786 16:18:05 -- accel/accel.sh@19 -- # IFS=: 00:06:56.786 16:18:05 -- accel/accel.sh@19 -- # read -r var val 00:06:56.786 16:18:05 -- accel/accel.sh@20 -- # val=decompress 00:06:56.786 16:18:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.786 16:18:05 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:56.786 16:18:05 -- accel/accel.sh@19 -- # IFS=: 00:06:56.786 16:18:05 -- accel/accel.sh@19 -- # read -r var val 00:06:56.786 16:18:05 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:56.786 16:18:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.786 16:18:05 -- accel/accel.sh@19 -- # IFS=: 00:06:56.786 16:18:05 -- accel/accel.sh@19 -- # read -r var val 00:06:56.786 16:18:05 -- accel/accel.sh@20 -- # val= 00:06:56.786 16:18:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.786 16:18:05 -- accel/accel.sh@19 -- # IFS=: 00:06:56.786 16:18:05 -- accel/accel.sh@19 -- # read -r var val 00:06:56.786 16:18:05 -- accel/accel.sh@20 -- # val=software 00:06:56.786 16:18:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.786 16:18:05 -- accel/accel.sh@22 -- # accel_module=software 00:06:56.786 16:18:05 -- accel/accel.sh@19 -- # IFS=: 00:06:56.786 16:18:05 -- accel/accel.sh@19 -- # read -r var val 00:06:56.786 16:18:05 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:56.786 16:18:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.786 16:18:05 -- accel/accel.sh@19 -- # IFS=: 00:06:56.786 16:18:05 -- accel/accel.sh@19 -- # read -r var val 00:06:56.786 16:18:05 -- accel/accel.sh@20 -- # val=32 00:06:56.786 16:18:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.786 16:18:05 -- accel/accel.sh@19 -- # IFS=: 00:06:56.786 16:18:05 -- 
accel/accel.sh@19 -- # read -r var val 00:06:56.786 16:18:05 -- accel/accel.sh@20 -- # val=32 00:06:56.786 16:18:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.786 16:18:05 -- accel/accel.sh@19 -- # IFS=: 00:06:56.786 16:18:05 -- accel/accel.sh@19 -- # read -r var val 00:06:56.786 16:18:05 -- accel/accel.sh@20 -- # val=1 00:06:56.786 16:18:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.786 16:18:05 -- accel/accel.sh@19 -- # IFS=: 00:06:56.786 16:18:05 -- accel/accel.sh@19 -- # read -r var val 00:06:56.786 16:18:05 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:56.786 16:18:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.786 16:18:05 -- accel/accel.sh@19 -- # IFS=: 00:06:56.786 16:18:05 -- accel/accel.sh@19 -- # read -r var val 00:06:56.786 16:18:05 -- accel/accel.sh@20 -- # val=Yes 00:06:56.786 16:18:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.786 16:18:05 -- accel/accel.sh@19 -- # IFS=: 00:06:56.786 16:18:05 -- accel/accel.sh@19 -- # read -r var val 00:06:56.786 16:18:05 -- accel/accel.sh@20 -- # val= 00:06:56.786 16:18:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.786 16:18:05 -- accel/accel.sh@19 -- # IFS=: 00:06:56.786 16:18:05 -- accel/accel.sh@19 -- # read -r var val 00:06:56.786 16:18:05 -- accel/accel.sh@20 -- # val= 00:06:56.786 16:18:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.786 16:18:05 -- accel/accel.sh@19 -- # IFS=: 00:06:56.786 16:18:05 -- accel/accel.sh@19 -- # read -r var val 00:06:58.166 16:18:06 -- accel/accel.sh@20 -- # val= 00:06:58.166 16:18:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.166 16:18:06 -- accel/accel.sh@19 -- # IFS=: 00:06:58.166 16:18:06 -- accel/accel.sh@19 -- # read -r var val 00:06:58.166 16:18:06 -- accel/accel.sh@20 -- # val= 00:06:58.166 16:18:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.166 16:18:06 -- accel/accel.sh@19 -- # IFS=: 00:06:58.166 16:18:06 -- accel/accel.sh@19 -- # read -r var val 00:06:58.166 16:18:06 -- accel/accel.sh@20 -- # val= 00:06:58.166 16:18:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.166 16:18:06 -- accel/accel.sh@19 -- # IFS=: 00:06:58.166 16:18:06 -- accel/accel.sh@19 -- # read -r var val 00:06:58.166 16:18:06 -- accel/accel.sh@20 -- # val= 00:06:58.166 16:18:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.166 16:18:06 -- accel/accel.sh@19 -- # IFS=: 00:06:58.166 16:18:06 -- accel/accel.sh@19 -- # read -r var val 00:06:58.166 16:18:06 -- accel/accel.sh@20 -- # val= 00:06:58.166 16:18:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.166 16:18:06 -- accel/accel.sh@19 -- # IFS=: 00:06:58.166 16:18:06 -- accel/accel.sh@19 -- # read -r var val 00:06:58.166 16:18:06 -- accel/accel.sh@20 -- # val= 00:06:58.166 16:18:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.166 16:18:06 -- accel/accel.sh@19 -- # IFS=: 00:06:58.166 16:18:06 -- accel/accel.sh@19 -- # read -r var val 00:06:58.166 16:18:06 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:58.166 16:18:06 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:58.166 16:18:06 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.166 00:06:58.166 real 0m1.403s 00:06:58.166 user 0m1.269s 00:06:58.166 sys 0m0.139s 00:06:58.166 16:18:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:58.166 16:18:06 -- common/autotest_common.sh@10 -- # set +x 00:06:58.166 ************************************ 00:06:58.166 END TEST accel_decomp 00:06:58.166 ************************************ 00:06:58.166 16:18:06 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:58.166 16:18:06 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:58.166 16:18:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:58.166 16:18:06 -- common/autotest_common.sh@10 -- # set +x 00:06:58.166 ************************************ 00:06:58.166 START TEST accel_decmop_full 00:06:58.166 ************************************ 00:06:58.166 16:18:07 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:58.166 16:18:07 -- accel/accel.sh@16 -- # local accel_opc 00:06:58.166 16:18:07 -- accel/accel.sh@17 -- # local accel_module 00:06:58.166 16:18:07 -- accel/accel.sh@19 -- # IFS=: 00:06:58.166 16:18:07 -- accel/accel.sh@19 -- # read -r var val 00:06:58.166 16:18:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:58.166 16:18:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:58.166 16:18:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.166 16:18:07 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.166 16:18:07 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.166 16:18:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.166 16:18:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.166 16:18:07 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.166 16:18:07 -- accel/accel.sh@40 -- # local IFS=, 00:06:58.166 16:18:07 -- accel/accel.sh@41 -- # jq -r . 00:06:58.166 [2024-04-26 16:18:07.083541] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
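The accel_decmop_full variant reuses the decompress workload with -o 0 appended, exactly as in the logged command; a hedged manual equivalent (same assumptions as the sketches above):

    # hedged sketch; '-o 0' is copied verbatim from the logged invocation
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0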
00:06:58.166 [2024-04-26 16:18:07.083601] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid347194 ] 00:06:58.166 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.166 [2024-04-26 16:18:07.156753] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.426 [2024-04-26 16:18:07.237404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.426 16:18:07 -- accel/accel.sh@20 -- # val= 00:06:58.426 16:18:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.426 16:18:07 -- accel/accel.sh@19 -- # IFS=: 00:06:58.426 16:18:07 -- accel/accel.sh@19 -- # read -r var val 00:06:58.426 16:18:07 -- accel/accel.sh@20 -- # val= 00:06:58.426 16:18:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.426 16:18:07 -- accel/accel.sh@19 -- # IFS=: 00:06:58.426 16:18:07 -- accel/accel.sh@19 -- # read -r var val 00:06:58.426 16:18:07 -- accel/accel.sh@20 -- # val= 00:06:58.426 16:18:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.426 16:18:07 -- accel/accel.sh@19 -- # IFS=: 00:06:58.426 16:18:07 -- accel/accel.sh@19 -- # read -r var val 00:06:58.426 16:18:07 -- accel/accel.sh@20 -- # val=0x1 00:06:58.426 16:18:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.426 16:18:07 -- accel/accel.sh@19 -- # IFS=: 00:06:58.426 16:18:07 -- accel/accel.sh@19 -- # read -r var val 00:06:58.426 16:18:07 -- accel/accel.sh@20 -- # val= 00:06:58.426 16:18:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.426 16:18:07 -- accel/accel.sh@19 -- # IFS=: 00:06:58.426 16:18:07 -- accel/accel.sh@19 -- # read -r var val 00:06:58.426 16:18:07 -- accel/accel.sh@20 -- # val= 00:06:58.426 16:18:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.426 16:18:07 -- accel/accel.sh@19 -- # IFS=: 00:06:58.426 16:18:07 -- accel/accel.sh@19 -- # read -r var val 00:06:58.426 16:18:07 -- accel/accel.sh@20 -- # val=decompress 00:06:58.426 16:18:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.426 16:18:07 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:58.426 16:18:07 -- accel/accel.sh@19 -- # IFS=: 00:06:58.426 16:18:07 -- accel/accel.sh@19 -- # read -r var val 00:06:58.426 16:18:07 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:58.426 16:18:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.426 16:18:07 -- accel/accel.sh@19 -- # IFS=: 00:06:58.426 16:18:07 -- accel/accel.sh@19 -- # read -r var val 00:06:58.426 16:18:07 -- accel/accel.sh@20 -- # val= 00:06:58.426 16:18:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.426 16:18:07 -- accel/accel.sh@19 -- # IFS=: 00:06:58.426 16:18:07 -- accel/accel.sh@19 -- # read -r var val 00:06:58.426 16:18:07 -- accel/accel.sh@20 -- # val=software 00:06:58.426 16:18:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.426 16:18:07 -- accel/accel.sh@22 -- # accel_module=software 00:06:58.426 16:18:07 -- accel/accel.sh@19 -- # IFS=: 00:06:58.426 16:18:07 -- accel/accel.sh@19 -- # read -r var val 00:06:58.426 16:18:07 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:06:58.426 16:18:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.426 16:18:07 -- accel/accel.sh@19 -- # IFS=: 00:06:58.426 16:18:07 -- accel/accel.sh@19 -- # read -r var val 00:06:58.426 16:18:07 -- accel/accel.sh@20 -- # val=32 00:06:58.426 16:18:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.426 16:18:07 -- accel/accel.sh@19 -- # IFS=: 00:06:58.426 16:18:07 -- 
accel/accel.sh@19 -- # read -r var val 00:06:58.426 16:18:07 -- accel/accel.sh@20 -- # val=32 00:06:58.426 16:18:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.426 16:18:07 -- accel/accel.sh@19 -- # IFS=: 00:06:58.426 16:18:07 -- accel/accel.sh@19 -- # read -r var val 00:06:58.426 16:18:07 -- accel/accel.sh@20 -- # val=1 00:06:58.426 16:18:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.426 16:18:07 -- accel/accel.sh@19 -- # IFS=: 00:06:58.426 16:18:07 -- accel/accel.sh@19 -- # read -r var val 00:06:58.426 16:18:07 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:58.426 16:18:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.426 16:18:07 -- accel/accel.sh@19 -- # IFS=: 00:06:58.426 16:18:07 -- accel/accel.sh@19 -- # read -r var val 00:06:58.426 16:18:07 -- accel/accel.sh@20 -- # val=Yes 00:06:58.426 16:18:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.426 16:18:07 -- accel/accel.sh@19 -- # IFS=: 00:06:58.426 16:18:07 -- accel/accel.sh@19 -- # read -r var val 00:06:58.426 16:18:07 -- accel/accel.sh@20 -- # val= 00:06:58.426 16:18:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.426 16:18:07 -- accel/accel.sh@19 -- # IFS=: 00:06:58.426 16:18:07 -- accel/accel.sh@19 -- # read -r var val 00:06:58.426 16:18:07 -- accel/accel.sh@20 -- # val= 00:06:58.426 16:18:07 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.426 16:18:07 -- accel/accel.sh@19 -- # IFS=: 00:06:58.426 16:18:07 -- accel/accel.sh@19 -- # read -r var val 00:06:59.803 16:18:08 -- accel/accel.sh@20 -- # val= 00:06:59.803 16:18:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.803 16:18:08 -- accel/accel.sh@19 -- # IFS=: 00:06:59.803 16:18:08 -- accel/accel.sh@19 -- # read -r var val 00:06:59.803 16:18:08 -- accel/accel.sh@20 -- # val= 00:06:59.803 16:18:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.803 16:18:08 -- accel/accel.sh@19 -- # IFS=: 00:06:59.803 16:18:08 -- accel/accel.sh@19 -- # read -r var val 00:06:59.803 16:18:08 -- accel/accel.sh@20 -- # val= 00:06:59.803 16:18:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.803 16:18:08 -- accel/accel.sh@19 -- # IFS=: 00:06:59.803 16:18:08 -- accel/accel.sh@19 -- # read -r var val 00:06:59.803 16:18:08 -- accel/accel.sh@20 -- # val= 00:06:59.803 16:18:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.803 16:18:08 -- accel/accel.sh@19 -- # IFS=: 00:06:59.803 16:18:08 -- accel/accel.sh@19 -- # read -r var val 00:06:59.803 16:18:08 -- accel/accel.sh@20 -- # val= 00:06:59.803 16:18:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.803 16:18:08 -- accel/accel.sh@19 -- # IFS=: 00:06:59.803 16:18:08 -- accel/accel.sh@19 -- # read -r var val 00:06:59.803 16:18:08 -- accel/accel.sh@20 -- # val= 00:06:59.803 16:18:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.803 16:18:08 -- accel/accel.sh@19 -- # IFS=: 00:06:59.803 16:18:08 -- accel/accel.sh@19 -- # read -r var val 00:06:59.803 16:18:08 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:59.803 16:18:08 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:59.803 16:18:08 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.803 00:06:59.803 real 0m1.403s 00:06:59.803 user 0m1.258s 00:06:59.803 sys 0m0.149s 00:06:59.803 16:18:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:59.803 16:18:08 -- common/autotest_common.sh@10 -- # set +x 00:06:59.803 ************************************ 00:06:59.803 END TEST accel_decmop_full 00:06:59.803 ************************************ 00:06:59.803 16:18:08 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:59.803 16:18:08 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:59.803 16:18:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:59.803 16:18:08 -- common/autotest_common.sh@10 -- # set +x 00:06:59.803 ************************************ 00:06:59.803 START TEST accel_decomp_mcore 00:06:59.803 ************************************ 00:06:59.803 16:18:08 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:59.803 16:18:08 -- accel/accel.sh@16 -- # local accel_opc 00:06:59.803 16:18:08 -- accel/accel.sh@17 -- # local accel_module 00:06:59.803 16:18:08 -- accel/accel.sh@19 -- # IFS=: 00:06:59.803 16:18:08 -- accel/accel.sh@19 -- # read -r var val 00:06:59.803 16:18:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:59.804 16:18:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:59.804 16:18:08 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.804 16:18:08 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.804 16:18:08 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.804 16:18:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.804 16:18:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.804 16:18:08 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.804 16:18:08 -- accel/accel.sh@40 -- # local IFS=, 00:06:59.804 16:18:08 -- accel/accel.sh@41 -- # jq -r . 00:06:59.804 [2024-04-26 16:18:08.679005] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
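The accel_decomp_mcore variant adds -m 0xf, which matches the "Total cores available: 4" line and the four reactor start-up messages below; a hedged manual equivalent:

    # hedged sketch; core mask 0xf corresponds to the four reactors shown in the log
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf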
00:06:59.804 [2024-04-26 16:18:08.679067] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid347486 ] 00:06:59.804 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.804 [2024-04-26 16:18:08.753938] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:00.062 [2024-04-26 16:18:08.840902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.062 [2024-04-26 16:18:08.840989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.062 [2024-04-26 16:18:08.841063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:00.062 [2024-04-26 16:18:08.841065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.062 16:18:08 -- accel/accel.sh@20 -- # val= 00:07:00.062 16:18:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.062 16:18:08 -- accel/accel.sh@19 -- # IFS=: 00:07:00.062 16:18:08 -- accel/accel.sh@19 -- # read -r var val 00:07:00.062 16:18:08 -- accel/accel.sh@20 -- # val= 00:07:00.062 16:18:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.062 16:18:08 -- accel/accel.sh@19 -- # IFS=: 00:07:00.062 16:18:08 -- accel/accel.sh@19 -- # read -r var val 00:07:00.062 16:18:08 -- accel/accel.sh@20 -- # val= 00:07:00.062 16:18:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.062 16:18:08 -- accel/accel.sh@19 -- # IFS=: 00:07:00.062 16:18:08 -- accel/accel.sh@19 -- # read -r var val 00:07:00.062 16:18:08 -- accel/accel.sh@20 -- # val=0xf 00:07:00.062 16:18:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.062 16:18:08 -- accel/accel.sh@19 -- # IFS=: 00:07:00.062 16:18:08 -- accel/accel.sh@19 -- # read -r var val 00:07:00.062 16:18:08 -- accel/accel.sh@20 -- # val= 00:07:00.062 16:18:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.062 16:18:08 -- accel/accel.sh@19 -- # IFS=: 00:07:00.062 16:18:08 -- accel/accel.sh@19 -- # read -r var val 00:07:00.062 16:18:08 -- accel/accel.sh@20 -- # val= 00:07:00.062 16:18:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.062 16:18:08 -- accel/accel.sh@19 -- # IFS=: 00:07:00.062 16:18:08 -- accel/accel.sh@19 -- # read -r var val 00:07:00.062 16:18:08 -- accel/accel.sh@20 -- # val=decompress 00:07:00.062 16:18:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.062 16:18:08 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:00.062 16:18:08 -- accel/accel.sh@19 -- # IFS=: 00:07:00.062 16:18:08 -- accel/accel.sh@19 -- # read -r var val 00:07:00.062 16:18:08 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:00.062 16:18:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.062 16:18:08 -- accel/accel.sh@19 -- # IFS=: 00:07:00.062 16:18:08 -- accel/accel.sh@19 -- # read -r var val 00:07:00.062 16:18:08 -- accel/accel.sh@20 -- # val= 00:07:00.062 16:18:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.062 16:18:08 -- accel/accel.sh@19 -- # IFS=: 00:07:00.062 16:18:08 -- accel/accel.sh@19 -- # read -r var val 00:07:00.062 16:18:08 -- accel/accel.sh@20 -- # val=software 00:07:00.062 16:18:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.062 16:18:08 -- accel/accel.sh@22 -- # accel_module=software 00:07:00.062 16:18:08 -- accel/accel.sh@19 -- # IFS=: 00:07:00.062 16:18:08 -- accel/accel.sh@19 -- # read -r var val 00:07:00.062 16:18:08 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:00.062 16:18:08 -- accel/accel.sh@21 -- # case "$var" in 
00:07:00.062 16:18:08 -- accel/accel.sh@19 -- # IFS=: 00:07:00.062 16:18:08 -- accel/accel.sh@19 -- # read -r var val 00:07:00.062 16:18:08 -- accel/accel.sh@20 -- # val=32 00:07:00.062 16:18:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.062 16:18:08 -- accel/accel.sh@19 -- # IFS=: 00:07:00.062 16:18:08 -- accel/accel.sh@19 -- # read -r var val 00:07:00.062 16:18:08 -- accel/accel.sh@20 -- # val=32 00:07:00.062 16:18:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.062 16:18:08 -- accel/accel.sh@19 -- # IFS=: 00:07:00.062 16:18:08 -- accel/accel.sh@19 -- # read -r var val 00:07:00.062 16:18:08 -- accel/accel.sh@20 -- # val=1 00:07:00.062 16:18:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.062 16:18:08 -- accel/accel.sh@19 -- # IFS=: 00:07:00.062 16:18:08 -- accel/accel.sh@19 -- # read -r var val 00:07:00.062 16:18:08 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:00.062 16:18:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.062 16:18:08 -- accel/accel.sh@19 -- # IFS=: 00:07:00.062 16:18:08 -- accel/accel.sh@19 -- # read -r var val 00:07:00.062 16:18:08 -- accel/accel.sh@20 -- # val=Yes 00:07:00.062 16:18:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.062 16:18:08 -- accel/accel.sh@19 -- # IFS=: 00:07:00.062 16:18:08 -- accel/accel.sh@19 -- # read -r var val 00:07:00.062 16:18:08 -- accel/accel.sh@20 -- # val= 00:07:00.062 16:18:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.062 16:18:08 -- accel/accel.sh@19 -- # IFS=: 00:07:00.062 16:18:08 -- accel/accel.sh@19 -- # read -r var val 00:07:00.062 16:18:08 -- accel/accel.sh@20 -- # val= 00:07:00.062 16:18:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.062 16:18:08 -- accel/accel.sh@19 -- # IFS=: 00:07:00.062 16:18:08 -- accel/accel.sh@19 -- # read -r var val 00:07:01.439 16:18:10 -- accel/accel.sh@20 -- # val= 00:07:01.439 16:18:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.439 16:18:10 -- accel/accel.sh@19 -- # IFS=: 00:07:01.439 16:18:10 -- accel/accel.sh@19 -- # read -r var val 00:07:01.439 16:18:10 -- accel/accel.sh@20 -- # val= 00:07:01.439 16:18:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.439 16:18:10 -- accel/accel.sh@19 -- # IFS=: 00:07:01.439 16:18:10 -- accel/accel.sh@19 -- # read -r var val 00:07:01.439 16:18:10 -- accel/accel.sh@20 -- # val= 00:07:01.439 16:18:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.439 16:18:10 -- accel/accel.sh@19 -- # IFS=: 00:07:01.439 16:18:10 -- accel/accel.sh@19 -- # read -r var val 00:07:01.439 16:18:10 -- accel/accel.sh@20 -- # val= 00:07:01.439 16:18:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.439 16:18:10 -- accel/accel.sh@19 -- # IFS=: 00:07:01.439 16:18:10 -- accel/accel.sh@19 -- # read -r var val 00:07:01.439 16:18:10 -- accel/accel.sh@20 -- # val= 00:07:01.439 16:18:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.439 16:18:10 -- accel/accel.sh@19 -- # IFS=: 00:07:01.439 16:18:10 -- accel/accel.sh@19 -- # read -r var val 00:07:01.439 16:18:10 -- accel/accel.sh@20 -- # val= 00:07:01.439 16:18:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.439 16:18:10 -- accel/accel.sh@19 -- # IFS=: 00:07:01.439 16:18:10 -- accel/accel.sh@19 -- # read -r var val 00:07:01.439 16:18:10 -- accel/accel.sh@20 -- # val= 00:07:01.439 16:18:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.439 16:18:10 -- accel/accel.sh@19 -- # IFS=: 00:07:01.439 16:18:10 -- accel/accel.sh@19 -- # read -r var val 00:07:01.439 16:18:10 -- accel/accel.sh@20 -- # val= 00:07:01.439 16:18:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.439 16:18:10 -- 
accel/accel.sh@19 -- # IFS=: 00:07:01.439 16:18:10 -- accel/accel.sh@19 -- # read -r var val 00:07:01.439 16:18:10 -- accel/accel.sh@20 -- # val= 00:07:01.439 16:18:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.439 16:18:10 -- accel/accel.sh@19 -- # IFS=: 00:07:01.439 16:18:10 -- accel/accel.sh@19 -- # read -r var val 00:07:01.439 16:18:10 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:01.439 16:18:10 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:01.439 16:18:10 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.439 00:07:01.439 real 0m1.427s 00:07:01.439 user 0m4.667s 00:07:01.439 sys 0m0.157s 00:07:01.439 16:18:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:01.439 16:18:10 -- common/autotest_common.sh@10 -- # set +x 00:07:01.439 ************************************ 00:07:01.439 END TEST accel_decomp_mcore 00:07:01.439 ************************************ 00:07:01.439 16:18:10 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:01.439 16:18:10 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:01.439 16:18:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:01.439 16:18:10 -- common/autotest_common.sh@10 -- # set +x 00:07:01.439 ************************************ 00:07:01.439 START TEST accel_decomp_full_mcore 00:07:01.439 ************************************ 00:07:01.439 16:18:10 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:01.439 16:18:10 -- accel/accel.sh@16 -- # local accel_opc 00:07:01.439 16:18:10 -- accel/accel.sh@17 -- # local accel_module 00:07:01.439 16:18:10 -- accel/accel.sh@19 -- # IFS=: 00:07:01.439 16:18:10 -- accel/accel.sh@19 -- # read -r var val 00:07:01.439 16:18:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:01.439 16:18:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:01.439 16:18:10 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.439 16:18:10 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.439 16:18:10 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.439 16:18:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.439 16:18:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.439 16:18:10 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.439 16:18:10 -- accel/accel.sh@40 -- # local IFS=, 00:07:01.439 16:18:10 -- accel/accel.sh@41 -- # jq -r . 00:07:01.439 [2024-04-26 16:18:10.319378] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
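The final accel_decomp_full_mcore variant combines the two previous additions, -o 0 and -m 0xf, in one run; a hedged manual equivalent of the logged command:

    # hedged sketch, mirrors the logged invocation
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf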
00:07:01.439 [2024-04-26 16:18:10.319441] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid347775 ] 00:07:01.439 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.439 [2024-04-26 16:18:10.394599] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:01.698 [2024-04-26 16:18:10.486535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.698 [2024-04-26 16:18:10.486552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.698 [2024-04-26 16:18:10.486636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.698 [2024-04-26 16:18:10.486635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:01.698 16:18:10 -- accel/accel.sh@20 -- # val= 00:07:01.698 16:18:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.698 16:18:10 -- accel/accel.sh@19 -- # IFS=: 00:07:01.698 16:18:10 -- accel/accel.sh@19 -- # read -r var val 00:07:01.698 16:18:10 -- accel/accel.sh@20 -- # val= 00:07:01.698 16:18:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.698 16:18:10 -- accel/accel.sh@19 -- # IFS=: 00:07:01.698 16:18:10 -- accel/accel.sh@19 -- # read -r var val 00:07:01.698 16:18:10 -- accel/accel.sh@20 -- # val= 00:07:01.698 16:18:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.698 16:18:10 -- accel/accel.sh@19 -- # IFS=: 00:07:01.698 16:18:10 -- accel/accel.sh@19 -- # read -r var val 00:07:01.698 16:18:10 -- accel/accel.sh@20 -- # val=0xf 00:07:01.698 16:18:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.698 16:18:10 -- accel/accel.sh@19 -- # IFS=: 00:07:01.698 16:18:10 -- accel/accel.sh@19 -- # read -r var val 00:07:01.698 16:18:10 -- accel/accel.sh@20 -- # val= 00:07:01.698 16:18:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.698 16:18:10 -- accel/accel.sh@19 -- # IFS=: 00:07:01.698 16:18:10 -- accel/accel.sh@19 -- # read -r var val 00:07:01.698 16:18:10 -- accel/accel.sh@20 -- # val= 00:07:01.698 16:18:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.698 16:18:10 -- accel/accel.sh@19 -- # IFS=: 00:07:01.698 16:18:10 -- accel/accel.sh@19 -- # read -r var val 00:07:01.698 16:18:10 -- accel/accel.sh@20 -- # val=decompress 00:07:01.698 16:18:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.698 16:18:10 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:01.698 16:18:10 -- accel/accel.sh@19 -- # IFS=: 00:07:01.698 16:18:10 -- accel/accel.sh@19 -- # read -r var val 00:07:01.698 16:18:10 -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:01.698 16:18:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.698 16:18:10 -- accel/accel.sh@19 -- # IFS=: 00:07:01.698 16:18:10 -- accel/accel.sh@19 -- # read -r var val 00:07:01.698 16:18:10 -- accel/accel.sh@20 -- # val= 00:07:01.698 16:18:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.698 16:18:10 -- accel/accel.sh@19 -- # IFS=: 00:07:01.698 16:18:10 -- accel/accel.sh@19 -- # read -r var val 00:07:01.698 16:18:10 -- accel/accel.sh@20 -- # val=software 00:07:01.698 16:18:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.698 16:18:10 -- accel/accel.sh@22 -- # accel_module=software 00:07:01.698 16:18:10 -- accel/accel.sh@19 -- # IFS=: 00:07:01.698 16:18:10 -- accel/accel.sh@19 -- # read -r var val 00:07:01.698 16:18:10 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:01.698 16:18:10 -- accel/accel.sh@21 -- # case "$var" 
in 00:07:01.698 16:18:10 -- accel/accel.sh@19 -- # IFS=: 00:07:01.698 16:18:10 -- accel/accel.sh@19 -- # read -r var val 00:07:01.698 16:18:10 -- accel/accel.sh@20 -- # val=32 00:07:01.698 16:18:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.698 16:18:10 -- accel/accel.sh@19 -- # IFS=: 00:07:01.698 16:18:10 -- accel/accel.sh@19 -- # read -r var val 00:07:01.698 16:18:10 -- accel/accel.sh@20 -- # val=32 00:07:01.698 16:18:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.698 16:18:10 -- accel/accel.sh@19 -- # IFS=: 00:07:01.698 16:18:10 -- accel/accel.sh@19 -- # read -r var val 00:07:01.698 16:18:10 -- accel/accel.sh@20 -- # val=1 00:07:01.698 16:18:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.698 16:18:10 -- accel/accel.sh@19 -- # IFS=: 00:07:01.698 16:18:10 -- accel/accel.sh@19 -- # read -r var val 00:07:01.698 16:18:10 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:01.698 16:18:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.698 16:18:10 -- accel/accel.sh@19 -- # IFS=: 00:07:01.698 16:18:10 -- accel/accel.sh@19 -- # read -r var val 00:07:01.698 16:18:10 -- accel/accel.sh@20 -- # val=Yes 00:07:01.698 16:18:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.698 16:18:10 -- accel/accel.sh@19 -- # IFS=: 00:07:01.698 16:18:10 -- accel/accel.sh@19 -- # read -r var val 00:07:01.698 16:18:10 -- accel/accel.sh@20 -- # val= 00:07:01.698 16:18:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.698 16:18:10 -- accel/accel.sh@19 -- # IFS=: 00:07:01.698 16:18:10 -- accel/accel.sh@19 -- # read -r var val 00:07:01.698 16:18:10 -- accel/accel.sh@20 -- # val= 00:07:01.698 16:18:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.698 16:18:10 -- accel/accel.sh@19 -- # IFS=: 00:07:01.698 16:18:10 -- accel/accel.sh@19 -- # read -r var val 00:07:03.075 16:18:11 -- accel/accel.sh@20 -- # val= 00:07:03.075 16:18:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.075 16:18:11 -- accel/accel.sh@19 -- # IFS=: 00:07:03.075 16:18:11 -- accel/accel.sh@19 -- # read -r var val 00:07:03.075 16:18:11 -- accel/accel.sh@20 -- # val= 00:07:03.075 16:18:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.075 16:18:11 -- accel/accel.sh@19 -- # IFS=: 00:07:03.076 16:18:11 -- accel/accel.sh@19 -- # read -r var val 00:07:03.076 16:18:11 -- accel/accel.sh@20 -- # val= 00:07:03.076 16:18:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.076 16:18:11 -- accel/accel.sh@19 -- # IFS=: 00:07:03.076 16:18:11 -- accel/accel.sh@19 -- # read -r var val 00:07:03.076 16:18:11 -- accel/accel.sh@20 -- # val= 00:07:03.076 16:18:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.076 16:18:11 -- accel/accel.sh@19 -- # IFS=: 00:07:03.076 16:18:11 -- accel/accel.sh@19 -- # read -r var val 00:07:03.076 16:18:11 -- accel/accel.sh@20 -- # val= 00:07:03.076 16:18:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.076 16:18:11 -- accel/accel.sh@19 -- # IFS=: 00:07:03.076 16:18:11 -- accel/accel.sh@19 -- # read -r var val 00:07:03.076 16:18:11 -- accel/accel.sh@20 -- # val= 00:07:03.076 16:18:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.076 16:18:11 -- accel/accel.sh@19 -- # IFS=: 00:07:03.076 16:18:11 -- accel/accel.sh@19 -- # read -r var val 00:07:03.076 16:18:11 -- accel/accel.sh@20 -- # val= 00:07:03.076 16:18:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.076 16:18:11 -- accel/accel.sh@19 -- # IFS=: 00:07:03.076 16:18:11 -- accel/accel.sh@19 -- # read -r var val 00:07:03.076 16:18:11 -- accel/accel.sh@20 -- # val= 00:07:03.076 16:18:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.076 16:18:11 
-- accel/accel.sh@19 -- # IFS=: 00:07:03.076 16:18:11 -- accel/accel.sh@19 -- # read -r var val 00:07:03.076 16:18:11 -- accel/accel.sh@20 -- # val= 00:07:03.076 16:18:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.076 16:18:11 -- accel/accel.sh@19 -- # IFS=: 00:07:03.076 16:18:11 -- accel/accel.sh@19 -- # read -r var val 00:07:03.076 16:18:11 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:03.076 16:18:11 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:03.076 16:18:11 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.076 00:07:03.076 real 0m1.436s 00:07:03.076 user 0m4.690s 00:07:03.076 sys 0m0.156s 00:07:03.076 16:18:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:03.076 16:18:11 -- common/autotest_common.sh@10 -- # set +x 00:07:03.076 ************************************ 00:07:03.076 END TEST accel_decomp_full_mcore 00:07:03.076 ************************************ 00:07:03.076 16:18:11 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:03.076 16:18:11 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:03.076 16:18:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:03.076 16:18:11 -- common/autotest_common.sh@10 -- # set +x 00:07:03.076 ************************************ 00:07:03.076 START TEST accel_decomp_mthread 00:07:03.076 ************************************ 00:07:03.076 16:18:11 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:03.076 16:18:11 -- accel/accel.sh@16 -- # local accel_opc 00:07:03.076 16:18:11 -- accel/accel.sh@17 -- # local accel_module 00:07:03.076 16:18:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:03.076 16:18:11 -- accel/accel.sh@19 -- # IFS=: 00:07:03.076 16:18:11 -- accel/accel.sh@19 -- # read -r var val 00:07:03.076 16:18:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:03.076 16:18:11 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.076 16:18:11 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.076 16:18:11 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.076 16:18:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.076 16:18:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.076 16:18:11 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.076 16:18:11 -- accel/accel.sh@40 -- # local IFS=, 00:07:03.076 16:18:11 -- accel/accel.sh@41 -- # jq -r . 00:07:03.076 [2024-04-26 16:18:11.941895] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
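Every decompress case in this stretch of the log boils down to an accel_perf invocation like the one traced just above. A minimal way to repeat it by hand, assuming you are in the same spdk checkout and that the default software accel module is enough (the -c /dev/fd/62 JSON config that accel.sh normally pipes in is dropped here as a simplification):

  # software decompress of the bundled test input, flags copied from the trace:
  # -w decompress workload, -t 1 second run, -T 2 worker threads, -l input file, -y as traced
  ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -T 2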
00:07:03.076 [2024-04-26 16:18:11.941957] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid347990 ] 00:07:03.076 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.076 [2024-04-26 16:18:12.009219] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.076 [2024-04-26 16:18:12.088809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.335 16:18:12 -- accel/accel.sh@20 -- # val= 00:07:03.335 16:18:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.335 16:18:12 -- accel/accel.sh@19 -- # IFS=: 00:07:03.335 16:18:12 -- accel/accel.sh@19 -- # read -r var val 00:07:03.335 16:18:12 -- accel/accel.sh@20 -- # val= 00:07:03.335 16:18:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.335 16:18:12 -- accel/accel.sh@19 -- # IFS=: 00:07:03.335 16:18:12 -- accel/accel.sh@19 -- # read -r var val 00:07:03.335 16:18:12 -- accel/accel.sh@20 -- # val= 00:07:03.335 16:18:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.335 16:18:12 -- accel/accel.sh@19 -- # IFS=: 00:07:03.335 16:18:12 -- accel/accel.sh@19 -- # read -r var val 00:07:03.335 16:18:12 -- accel/accel.sh@20 -- # val=0x1 00:07:03.335 16:18:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.335 16:18:12 -- accel/accel.sh@19 -- # IFS=: 00:07:03.335 16:18:12 -- accel/accel.sh@19 -- # read -r var val 00:07:03.335 16:18:12 -- accel/accel.sh@20 -- # val= 00:07:03.335 16:18:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.335 16:18:12 -- accel/accel.sh@19 -- # IFS=: 00:07:03.335 16:18:12 -- accel/accel.sh@19 -- # read -r var val 00:07:03.335 16:18:12 -- accel/accel.sh@20 -- # val= 00:07:03.335 16:18:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.335 16:18:12 -- accel/accel.sh@19 -- # IFS=: 00:07:03.335 16:18:12 -- accel/accel.sh@19 -- # read -r var val 00:07:03.335 16:18:12 -- accel/accel.sh@20 -- # val=decompress 00:07:03.335 16:18:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.335 16:18:12 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:03.335 16:18:12 -- accel/accel.sh@19 -- # IFS=: 00:07:03.335 16:18:12 -- accel/accel.sh@19 -- # read -r var val 00:07:03.335 16:18:12 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:03.335 16:18:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.335 16:18:12 -- accel/accel.sh@19 -- # IFS=: 00:07:03.335 16:18:12 -- accel/accel.sh@19 -- # read -r var val 00:07:03.335 16:18:12 -- accel/accel.sh@20 -- # val= 00:07:03.335 16:18:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.335 16:18:12 -- accel/accel.sh@19 -- # IFS=: 00:07:03.335 16:18:12 -- accel/accel.sh@19 -- # read -r var val 00:07:03.335 16:18:12 -- accel/accel.sh@20 -- # val=software 00:07:03.335 16:18:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.335 16:18:12 -- accel/accel.sh@22 -- # accel_module=software 00:07:03.335 16:18:12 -- accel/accel.sh@19 -- # IFS=: 00:07:03.335 16:18:12 -- accel/accel.sh@19 -- # read -r var val 00:07:03.335 16:18:12 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:03.335 16:18:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.335 16:18:12 -- accel/accel.sh@19 -- # IFS=: 00:07:03.335 16:18:12 -- accel/accel.sh@19 -- # read -r var val 00:07:03.335 16:18:12 -- accel/accel.sh@20 -- # val=32 00:07:03.335 16:18:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.335 16:18:12 -- accel/accel.sh@19 -- # IFS=: 00:07:03.335 16:18:12 -- 
accel/accel.sh@19 -- # read -r var val 00:07:03.335 16:18:12 -- accel/accel.sh@20 -- # val=32 00:07:03.335 16:18:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.335 16:18:12 -- accel/accel.sh@19 -- # IFS=: 00:07:03.335 16:18:12 -- accel/accel.sh@19 -- # read -r var val 00:07:03.336 16:18:12 -- accel/accel.sh@20 -- # val=2 00:07:03.336 16:18:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.336 16:18:12 -- accel/accel.sh@19 -- # IFS=: 00:07:03.336 16:18:12 -- accel/accel.sh@19 -- # read -r var val 00:07:03.336 16:18:12 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:03.336 16:18:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.336 16:18:12 -- accel/accel.sh@19 -- # IFS=: 00:07:03.336 16:18:12 -- accel/accel.sh@19 -- # read -r var val 00:07:03.336 16:18:12 -- accel/accel.sh@20 -- # val=Yes 00:07:03.336 16:18:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.336 16:18:12 -- accel/accel.sh@19 -- # IFS=: 00:07:03.336 16:18:12 -- accel/accel.sh@19 -- # read -r var val 00:07:03.336 16:18:12 -- accel/accel.sh@20 -- # val= 00:07:03.336 16:18:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.336 16:18:12 -- accel/accel.sh@19 -- # IFS=: 00:07:03.336 16:18:12 -- accel/accel.sh@19 -- # read -r var val 00:07:03.336 16:18:12 -- accel/accel.sh@20 -- # val= 00:07:03.336 16:18:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:03.336 16:18:12 -- accel/accel.sh@19 -- # IFS=: 00:07:03.336 16:18:12 -- accel/accel.sh@19 -- # read -r var val 00:07:04.716 16:18:13 -- accel/accel.sh@20 -- # val= 00:07:04.716 16:18:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.716 16:18:13 -- accel/accel.sh@19 -- # IFS=: 00:07:04.716 16:18:13 -- accel/accel.sh@19 -- # read -r var val 00:07:04.716 16:18:13 -- accel/accel.sh@20 -- # val= 00:07:04.716 16:18:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.716 16:18:13 -- accel/accel.sh@19 -- # IFS=: 00:07:04.716 16:18:13 -- accel/accel.sh@19 -- # read -r var val 00:07:04.716 16:18:13 -- accel/accel.sh@20 -- # val= 00:07:04.716 16:18:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.716 16:18:13 -- accel/accel.sh@19 -- # IFS=: 00:07:04.716 16:18:13 -- accel/accel.sh@19 -- # read -r var val 00:07:04.716 16:18:13 -- accel/accel.sh@20 -- # val= 00:07:04.716 16:18:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.716 16:18:13 -- accel/accel.sh@19 -- # IFS=: 00:07:04.716 16:18:13 -- accel/accel.sh@19 -- # read -r var val 00:07:04.716 16:18:13 -- accel/accel.sh@20 -- # val= 00:07:04.716 16:18:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.716 16:18:13 -- accel/accel.sh@19 -- # IFS=: 00:07:04.716 16:18:13 -- accel/accel.sh@19 -- # read -r var val 00:07:04.716 16:18:13 -- accel/accel.sh@20 -- # val= 00:07:04.716 16:18:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.716 16:18:13 -- accel/accel.sh@19 -- # IFS=: 00:07:04.716 16:18:13 -- accel/accel.sh@19 -- # read -r var val 00:07:04.716 16:18:13 -- accel/accel.sh@20 -- # val= 00:07:04.716 16:18:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.716 16:18:13 -- accel/accel.sh@19 -- # IFS=: 00:07:04.716 16:18:13 -- accel/accel.sh@19 -- # read -r var val 00:07:04.716 16:18:13 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:04.716 16:18:13 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:04.716 16:18:13 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.716 00:07:04.716 real 0m1.385s 00:07:04.716 user 0m1.256s 00:07:04.716 sys 0m0.145s 00:07:04.716 16:18:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:04.716 16:18:13 -- common/autotest_common.sh@10 -- # set +x 
00:07:04.716 ************************************ 00:07:04.716 END TEST accel_decomp_mthread 00:07:04.716 ************************************ 00:07:04.716 16:18:13 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:04.716 16:18:13 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:04.716 16:18:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:04.716 16:18:13 -- common/autotest_common.sh@10 -- # set +x 00:07:04.716 ************************************ 00:07:04.716 START TEST accel_deomp_full_mthread 00:07:04.716 ************************************ 00:07:04.716 16:18:13 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:04.716 16:18:13 -- accel/accel.sh@16 -- # local accel_opc 00:07:04.716 16:18:13 -- accel/accel.sh@17 -- # local accel_module 00:07:04.716 16:18:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:04.716 16:18:13 -- accel/accel.sh@19 -- # IFS=: 00:07:04.716 16:18:13 -- accel/accel.sh@19 -- # read -r var val 00:07:04.716 16:18:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:04.716 16:18:13 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.716 16:18:13 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.716 16:18:13 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.716 16:18:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.716 16:18:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.716 16:18:13 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.716 16:18:13 -- accel/accel.sh@40 -- # local IFS=, 00:07:04.716 16:18:13 -- accel/accel.sh@41 -- # jq -r . 00:07:04.716 [2024-04-26 16:18:13.510137] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
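The accel_deomp_full_mthread case that starts here is the same decompress workload with one extra flag, -o 0. Judging by the buffer sizes in the trace ('4096 bytes' in the previous case, '111250 bytes' in this one), -o 0 appears to make accel_perf work on the whole bib input per operation instead of the default 4096-byte chunks; that reading is inferred from the traced values, not from the tool's documentation. Stripped of paths and the -c config argument, the two invocations differ only in that flag:

  # accel_decomp_mthread (previous case): default-sized decompress operations
  accel_perf -t 1 -w decompress -l test/accel/bib -y -T 2
  # accel_deomp_full_mthread (this case): adds -o 0 for whole-file operations
  accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0 -T 2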
00:07:04.716 [2024-04-26 16:18:13.510183] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid348201 ] 00:07:04.716 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.716 [2024-04-26 16:18:13.577505] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.716 [2024-04-26 16:18:13.656799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.716 16:18:13 -- accel/accel.sh@20 -- # val= 00:07:04.716 16:18:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.716 16:18:13 -- accel/accel.sh@19 -- # IFS=: 00:07:04.716 16:18:13 -- accel/accel.sh@19 -- # read -r var val 00:07:04.716 16:18:13 -- accel/accel.sh@20 -- # val= 00:07:04.716 16:18:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.716 16:18:13 -- accel/accel.sh@19 -- # IFS=: 00:07:04.716 16:18:13 -- accel/accel.sh@19 -- # read -r var val 00:07:04.716 16:18:13 -- accel/accel.sh@20 -- # val= 00:07:04.716 16:18:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.716 16:18:13 -- accel/accel.sh@19 -- # IFS=: 00:07:04.716 16:18:13 -- accel/accel.sh@19 -- # read -r var val 00:07:04.716 16:18:13 -- accel/accel.sh@20 -- # val=0x1 00:07:04.716 16:18:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.717 16:18:13 -- accel/accel.sh@19 -- # IFS=: 00:07:04.717 16:18:13 -- accel/accel.sh@19 -- # read -r var val 00:07:04.717 16:18:13 -- accel/accel.sh@20 -- # val= 00:07:04.717 16:18:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.717 16:18:13 -- accel/accel.sh@19 -- # IFS=: 00:07:04.717 16:18:13 -- accel/accel.sh@19 -- # read -r var val 00:07:04.717 16:18:13 -- accel/accel.sh@20 -- # val= 00:07:04.717 16:18:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.717 16:18:13 -- accel/accel.sh@19 -- # IFS=: 00:07:04.717 16:18:13 -- accel/accel.sh@19 -- # read -r var val 00:07:04.717 16:18:13 -- accel/accel.sh@20 -- # val=decompress 00:07:04.717 16:18:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.717 16:18:13 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:04.717 16:18:13 -- accel/accel.sh@19 -- # IFS=: 00:07:04.717 16:18:13 -- accel/accel.sh@19 -- # read -r var val 00:07:04.717 16:18:13 -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:04.717 16:18:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.717 16:18:13 -- accel/accel.sh@19 -- # IFS=: 00:07:04.717 16:18:13 -- accel/accel.sh@19 -- # read -r var val 00:07:04.717 16:18:13 -- accel/accel.sh@20 -- # val= 00:07:04.717 16:18:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.717 16:18:13 -- accel/accel.sh@19 -- # IFS=: 00:07:04.717 16:18:13 -- accel/accel.sh@19 -- # read -r var val 00:07:04.717 16:18:13 -- accel/accel.sh@20 -- # val=software 00:07:04.717 16:18:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.717 16:18:13 -- accel/accel.sh@22 -- # accel_module=software 00:07:04.717 16:18:13 -- accel/accel.sh@19 -- # IFS=: 00:07:04.717 16:18:13 -- accel/accel.sh@19 -- # read -r var val 00:07:04.717 16:18:13 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:04.717 16:18:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.717 16:18:13 -- accel/accel.sh@19 -- # IFS=: 00:07:04.717 16:18:13 -- accel/accel.sh@19 -- # read -r var val 00:07:04.717 16:18:13 -- accel/accel.sh@20 -- # val=32 00:07:04.717 16:18:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.717 16:18:13 -- accel/accel.sh@19 -- # IFS=: 00:07:04.717 16:18:13 -- 
accel/accel.sh@19 -- # read -r var val 00:07:04.717 16:18:13 -- accel/accel.sh@20 -- # val=32 00:07:04.717 16:18:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.717 16:18:13 -- accel/accel.sh@19 -- # IFS=: 00:07:04.717 16:18:13 -- accel/accel.sh@19 -- # read -r var val 00:07:04.717 16:18:13 -- accel/accel.sh@20 -- # val=2 00:07:04.717 16:18:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.717 16:18:13 -- accel/accel.sh@19 -- # IFS=: 00:07:04.717 16:18:13 -- accel/accel.sh@19 -- # read -r var val 00:07:04.717 16:18:13 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:04.717 16:18:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.717 16:18:13 -- accel/accel.sh@19 -- # IFS=: 00:07:04.717 16:18:13 -- accel/accel.sh@19 -- # read -r var val 00:07:04.717 16:18:13 -- accel/accel.sh@20 -- # val=Yes 00:07:04.717 16:18:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.717 16:18:13 -- accel/accel.sh@19 -- # IFS=: 00:07:04.717 16:18:13 -- accel/accel.sh@19 -- # read -r var val 00:07:04.717 16:18:13 -- accel/accel.sh@20 -- # val= 00:07:04.717 16:18:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.717 16:18:13 -- accel/accel.sh@19 -- # IFS=: 00:07:04.717 16:18:13 -- accel/accel.sh@19 -- # read -r var val 00:07:04.717 16:18:13 -- accel/accel.sh@20 -- # val= 00:07:04.717 16:18:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:04.717 16:18:13 -- accel/accel.sh@19 -- # IFS=: 00:07:04.717 16:18:13 -- accel/accel.sh@19 -- # read -r var val 00:07:06.094 16:18:14 -- accel/accel.sh@20 -- # val= 00:07:06.094 16:18:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.094 16:18:14 -- accel/accel.sh@19 -- # IFS=: 00:07:06.094 16:18:14 -- accel/accel.sh@19 -- # read -r var val 00:07:06.094 16:18:14 -- accel/accel.sh@20 -- # val= 00:07:06.094 16:18:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.094 16:18:14 -- accel/accel.sh@19 -- # IFS=: 00:07:06.094 16:18:14 -- accel/accel.sh@19 -- # read -r var val 00:07:06.094 16:18:14 -- accel/accel.sh@20 -- # val= 00:07:06.094 16:18:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.094 16:18:14 -- accel/accel.sh@19 -- # IFS=: 00:07:06.094 16:18:14 -- accel/accel.sh@19 -- # read -r var val 00:07:06.094 16:18:14 -- accel/accel.sh@20 -- # val= 00:07:06.094 16:18:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.094 16:18:14 -- accel/accel.sh@19 -- # IFS=: 00:07:06.094 16:18:14 -- accel/accel.sh@19 -- # read -r var val 00:07:06.094 16:18:14 -- accel/accel.sh@20 -- # val= 00:07:06.094 16:18:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.094 16:18:14 -- accel/accel.sh@19 -- # IFS=: 00:07:06.094 16:18:14 -- accel/accel.sh@19 -- # read -r var val 00:07:06.094 16:18:14 -- accel/accel.sh@20 -- # val= 00:07:06.094 16:18:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.094 16:18:14 -- accel/accel.sh@19 -- # IFS=: 00:07:06.094 16:18:14 -- accel/accel.sh@19 -- # read -r var val 00:07:06.094 16:18:14 -- accel/accel.sh@20 -- # val= 00:07:06.094 16:18:14 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.094 16:18:14 -- accel/accel.sh@19 -- # IFS=: 00:07:06.094 16:18:14 -- accel/accel.sh@19 -- # read -r var val 00:07:06.094 16:18:14 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:06.094 16:18:14 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:06.094 16:18:14 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.094 00:07:06.094 real 0m1.408s 00:07:06.094 user 0m1.286s 00:07:06.094 sys 0m0.136s 00:07:06.094 16:18:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:06.094 16:18:14 -- common/autotest_common.sh@10 -- # set +x 
00:07:06.094 ************************************ 00:07:06.094 END TEST accel_deomp_full_mthread 00:07:06.094 ************************************ 00:07:06.094 16:18:14 -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:06.095 16:18:14 -- accel/accel.sh@137 -- # build_accel_config 00:07:06.095 16:18:14 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:06.095 16:18:14 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.095 16:18:14 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.095 16:18:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.095 16:18:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.095 16:18:14 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:06.095 16:18:14 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.095 16:18:14 -- accel/accel.sh@40 -- # local IFS=, 00:07:06.095 16:18:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:06.095 16:18:14 -- accel/accel.sh@41 -- # jq -r . 00:07:06.095 16:18:14 -- common/autotest_common.sh@10 -- # set +x 00:07:06.095 ************************************ 00:07:06.095 START TEST accel_dif_functional_tests 00:07:06.095 ************************************ 00:07:06.095 16:18:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:06.095 [2024-04-26 16:18:15.108814] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:07:06.095 [2024-04-26 16:18:15.108861] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid348409 ] 00:07:06.354 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.354 [2024-04-26 16:18:15.181123] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:06.354 [2024-04-26 16:18:15.261359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.354 [2024-04-26 16:18:15.261432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:06.354 [2024-04-26 16:18:15.261435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.354 00:07:06.354 00:07:06.354 CUnit - A unit testing framework for C - Version 2.1-3 00:07:06.354 http://cunit.sourceforge.net/ 00:07:06.354 00:07:06.354 00:07:06.354 Suite: accel_dif 00:07:06.354 Test: verify: DIF generated, GUARD check ...passed 00:07:06.354 Test: verify: DIF generated, APPTAG check ...passed 00:07:06.354 Test: verify: DIF generated, REFTAG check ...passed 00:07:06.354 Test: verify: DIF not generated, GUARD check ...[2024-04-26 16:18:15.340883] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:06.354 [2024-04-26 16:18:15.340934] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:06.354 passed 00:07:06.354 Test: verify: DIF not generated, APPTAG check ...[2024-04-26 16:18:15.340984] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:06.354 [2024-04-26 16:18:15.341001] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:06.354 passed 00:07:06.354 Test: verify: DIF not generated, REFTAG check ...[2024-04-26 16:18:15.341022] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:06.354 [2024-04-26 16:18:15.341040] 
dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:06.354 passed 00:07:06.354 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:06.354 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-26 16:18:15.341084] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:06.354 passed 00:07:06.354 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:06.354 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:06.354 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:06.354 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-26 16:18:15.341194] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:06.354 passed 00:07:06.354 Test: generate copy: DIF generated, GUARD check ...passed 00:07:06.354 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:06.354 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:06.354 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:06.354 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:06.354 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:06.354 Test: generate copy: iovecs-len validate ...[2024-04-26 16:18:15.341369] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:07:06.354 passed 00:07:06.354 Test: generate copy: buffer alignment validate ...passed 00:07:06.354 00:07:06.354 Run Summary: Type Total Ran Passed Failed Inactive 00:07:06.354 suites 1 1 n/a 0 0 00:07:06.354 tests 20 20 20 0 0 00:07:06.354 asserts 204 204 204 0 n/a 00:07:06.354 00:07:06.354 Elapsed time = 0.002 seconds 00:07:06.613 00:07:06.613 real 0m0.475s 00:07:06.613 user 0m0.671s 00:07:06.613 sys 0m0.163s 00:07:06.613 16:18:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:06.613 16:18:15 -- common/autotest_common.sh@10 -- # set +x 00:07:06.613 ************************************ 00:07:06.613 END TEST accel_dif_functional_tests 00:07:06.613 ************************************ 00:07:06.613 00:07:06.613 real 0m35.696s 00:07:06.613 user 0m36.578s 00:07:06.613 sys 0m6.579s 00:07:06.613 16:18:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:06.613 16:18:15 -- common/autotest_common.sh@10 -- # set +x 00:07:06.613 ************************************ 00:07:06.613 END TEST accel 00:07:06.613 ************************************ 00:07:06.613 16:18:15 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:06.613 16:18:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:06.613 16:18:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:06.613 16:18:15 -- common/autotest_common.sh@10 -- # set +x 00:07:06.872 ************************************ 00:07:06.872 START TEST accel_rpc 00:07:06.872 ************************************ 00:07:06.872 16:18:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:06.872 * Looking for test storage... 
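The accel_dif_functional_tests block that just finished is driven by a dedicated CUnit binary rather than accel_perf; the twenty verify and generate-copy cases above (guard, app tag and ref tag checks) all come from that one executable. To rerun only that suite, a minimal sketch, assuming the default software accel modules are sufficient and therefore skipping the -c /dev/fd/62 config that accel.sh normally feeds it:

  # rerun just the DIF verify / generate-copy unit suite traced above
  cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
  ./test/accel/dif/dif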
00:07:06.872 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:07:06.872 16:18:15 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:06.872 16:18:15 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=348654 00:07:06.872 16:18:15 -- accel/accel_rpc.sh@15 -- # waitforlisten 348654 00:07:06.872 16:18:15 -- common/autotest_common.sh@817 -- # '[' -z 348654 ']' 00:07:06.872 16:18:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.872 16:18:15 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:06.872 16:18:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:06.872 16:18:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.872 16:18:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:06.872 16:18:15 -- common/autotest_common.sh@10 -- # set +x 00:07:07.130 [2024-04-26 16:18:15.932692] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:07:07.130 [2024-04-26 16:18:15.932743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid348654 ] 00:07:07.130 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.130 [2024-04-26 16:18:16.003524] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.130 [2024-04-26 16:18:16.080220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.068 16:18:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:08.068 16:18:16 -- common/autotest_common.sh@850 -- # return 0 00:07:08.068 16:18:16 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:08.068 16:18:16 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:08.068 16:18:16 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:08.068 16:18:16 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:08.068 16:18:16 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:08.068 16:18:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:08.068 16:18:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:08.068 16:18:16 -- common/autotest_common.sh@10 -- # set +x 00:07:08.068 ************************************ 00:07:08.068 START TEST accel_assign_opcode 00:07:08.068 ************************************ 00:07:08.068 16:18:16 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:07:08.068 16:18:16 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:08.068 16:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:08.068 16:18:16 -- common/autotest_common.sh@10 -- # set +x 00:07:08.068 [2024-04-26 16:18:16.854469] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:08.068 16:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:08.068 16:18:16 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:08.068 16:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:08.068 16:18:16 -- common/autotest_common.sh@10 -- # set +x 00:07:08.068 [2024-04-26 16:18:16.862484] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy 
will be assigned to module software 00:07:08.068 16:18:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:08.068 16:18:16 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:08.068 16:18:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:08.068 16:18:16 -- common/autotest_common.sh@10 -- # set +x 00:07:08.068 16:18:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:08.068 16:18:17 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:08.068 16:18:17 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:08.068 16:18:17 -- accel/accel_rpc.sh@42 -- # grep software 00:07:08.068 16:18:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:08.068 16:18:17 -- common/autotest_common.sh@10 -- # set +x 00:07:08.068 16:18:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:08.327 software 00:07:08.327 00:07:08.327 real 0m0.250s 00:07:08.327 user 0m0.041s 00:07:08.327 sys 0m0.014s 00:07:08.327 16:18:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:08.327 16:18:17 -- common/autotest_common.sh@10 -- # set +x 00:07:08.327 ************************************ 00:07:08.327 END TEST accel_assign_opcode 00:07:08.327 ************************************ 00:07:08.327 16:18:17 -- accel/accel_rpc.sh@55 -- # killprocess 348654 00:07:08.327 16:18:17 -- common/autotest_common.sh@936 -- # '[' -z 348654 ']' 00:07:08.327 16:18:17 -- common/autotest_common.sh@940 -- # kill -0 348654 00:07:08.327 16:18:17 -- common/autotest_common.sh@941 -- # uname 00:07:08.327 16:18:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:08.327 16:18:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 348654 00:07:08.327 16:18:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:08.327 16:18:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:08.327 16:18:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 348654' 00:07:08.327 killing process with pid 348654 00:07:08.327 16:18:17 -- common/autotest_common.sh@955 -- # kill 348654 00:07:08.328 16:18:17 -- common/autotest_common.sh@960 -- # wait 348654 00:07:08.587 00:07:08.587 real 0m1.771s 00:07:08.587 user 0m1.797s 00:07:08.587 sys 0m0.542s 00:07:08.587 16:18:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:08.587 16:18:17 -- common/autotest_common.sh@10 -- # set +x 00:07:08.587 ************************************ 00:07:08.587 END TEST accel_rpc 00:07:08.587 ************************************ 00:07:08.587 16:18:17 -- spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:08.587 16:18:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:08.587 16:18:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:08.587 16:18:17 -- common/autotest_common.sh@10 -- # set +x 00:07:08.846 ************************************ 00:07:08.846 START TEST app_cmdline 00:07:08.846 ************************************ 00:07:08.846 16:18:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:08.846 * Looking for test storage... 
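The accel_assign_opcode case above is pure JSON-RPC: a target started with --wait-for-rpc, two accel_assign_opc calls (the first deliberately naming a module that does not exist, which the target still accepts at that point, as the NOTICE shows), framework_start_init, and a final accel_get_opc_assignments check. A hand-run replay against the default RPC socket, as a sketch:

  ./build/bin/spdk_tgt --wait-for-rpc &
  sleep 2                                                    # crude stand-in for the waitforlisten helper used above
  ./scripts/rpc.py accel_assign_opc -o copy -m incorrect     # accepted pre-init, matching the first NOTICE
  ./scripts/rpc.py accel_assign_opc -o copy -m software      # the assignment the test actually checks
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy   # prints "software" on success
  kill $!                                                    # stop the target again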
00:07:08.846 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:08.846 16:18:17 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:08.846 16:18:17 -- app/cmdline.sh@17 -- # spdk_tgt_pid=348931 00:07:08.846 16:18:17 -- app/cmdline.sh@18 -- # waitforlisten 348931 00:07:08.846 16:18:17 -- common/autotest_common.sh@817 -- # '[' -z 348931 ']' 00:07:08.846 16:18:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.846 16:18:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:08.846 16:18:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.846 16:18:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:08.846 16:18:17 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:08.846 16:18:17 -- common/autotest_common.sh@10 -- # set +x 00:07:08.846 [2024-04-26 16:18:17.860785] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:07:08.846 [2024-04-26 16:18:17.860851] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid348931 ] 00:07:09.105 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.105 [2024-04-26 16:18:17.933834] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.105 [2024-04-26 16:18:18.015414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.673 16:18:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:09.673 16:18:18 -- common/autotest_common.sh@850 -- # return 0 00:07:09.673 16:18:18 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:09.932 { 00:07:09.932 "version": "SPDK v24.05-pre git sha1 bba4d07b0", 00:07:09.932 "fields": { 00:07:09.932 "major": 24, 00:07:09.932 "minor": 5, 00:07:09.932 "patch": 0, 00:07:09.932 "suffix": "-pre", 00:07:09.932 "commit": "bba4d07b0" 00:07:09.932 } 00:07:09.932 } 00:07:09.932 16:18:18 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:09.932 16:18:18 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:09.932 16:18:18 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:09.932 16:18:18 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:09.932 16:18:18 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:09.932 16:18:18 -- app/cmdline.sh@26 -- # sort 00:07:09.932 16:18:18 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:09.932 16:18:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:09.932 16:18:18 -- common/autotest_common.sh@10 -- # set +x 00:07:09.932 16:18:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:09.932 16:18:18 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:09.932 16:18:18 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:09.932 16:18:18 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:09.932 16:18:18 -- common/autotest_common.sh@638 -- # local es=0 00:07:09.932 16:18:18 -- 
common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:09.932 16:18:18 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:09.932 16:18:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:09.932 16:18:18 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:09.932 16:18:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:09.932 16:18:18 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:09.932 16:18:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:09.932 16:18:18 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:09.932 16:18:18 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:07:09.932 16:18:18 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:10.191 request: 00:07:10.191 { 00:07:10.191 "method": "env_dpdk_get_mem_stats", 00:07:10.191 "req_id": 1 00:07:10.191 } 00:07:10.191 Got JSON-RPC error response 00:07:10.191 response: 00:07:10.191 { 00:07:10.191 "code": -32601, 00:07:10.191 "message": "Method not found" 00:07:10.191 } 00:07:10.191 16:18:19 -- common/autotest_common.sh@641 -- # es=1 00:07:10.191 16:18:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:10.191 16:18:19 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:10.191 16:18:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:10.191 16:18:19 -- app/cmdline.sh@1 -- # killprocess 348931 00:07:10.191 16:18:19 -- common/autotest_common.sh@936 -- # '[' -z 348931 ']' 00:07:10.191 16:18:19 -- common/autotest_common.sh@940 -- # kill -0 348931 00:07:10.191 16:18:19 -- common/autotest_common.sh@941 -- # uname 00:07:10.191 16:18:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:10.191 16:18:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 348931 00:07:10.191 16:18:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:10.191 16:18:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:10.191 16:18:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 348931' 00:07:10.191 killing process with pid 348931 00:07:10.191 16:18:19 -- common/autotest_common.sh@955 -- # kill 348931 00:07:10.191 16:18:19 -- common/autotest_common.sh@960 -- # wait 348931 00:07:10.450 00:07:10.450 real 0m1.709s 00:07:10.450 user 0m1.980s 00:07:10.450 sys 0m0.491s 00:07:10.450 16:18:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:10.450 16:18:19 -- common/autotest_common.sh@10 -- # set +x 00:07:10.450 ************************************ 00:07:10.450 END TEST app_cmdline 00:07:10.450 ************************************ 00:07:10.709 16:18:19 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:10.709 16:18:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:10.709 16:18:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:10.709 16:18:19 -- common/autotest_common.sh@10 -- # set +x 00:07:10.709 ************************************ 00:07:10.709 START TEST version 00:07:10.709 ************************************ 00:07:10.709 
16:18:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:10.969 * Looking for test storage... 00:07:10.969 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:10.969 16:18:19 -- app/version.sh@17 -- # get_header_version major 00:07:10.969 16:18:19 -- app/version.sh@14 -- # cut -f2 00:07:10.969 16:18:19 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:10.969 16:18:19 -- app/version.sh@14 -- # tr -d '"' 00:07:10.969 16:18:19 -- app/version.sh@17 -- # major=24 00:07:10.969 16:18:19 -- app/version.sh@18 -- # get_header_version minor 00:07:10.969 16:18:19 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:10.969 16:18:19 -- app/version.sh@14 -- # cut -f2 00:07:10.969 16:18:19 -- app/version.sh@14 -- # tr -d '"' 00:07:10.969 16:18:19 -- app/version.sh@18 -- # minor=5 00:07:10.969 16:18:19 -- app/version.sh@19 -- # get_header_version patch 00:07:10.969 16:18:19 -- app/version.sh@14 -- # cut -f2 00:07:10.969 16:18:19 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:10.969 16:18:19 -- app/version.sh@14 -- # tr -d '"' 00:07:10.969 16:18:19 -- app/version.sh@19 -- # patch=0 00:07:10.969 16:18:19 -- app/version.sh@20 -- # get_header_version suffix 00:07:10.969 16:18:19 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:10.969 16:18:19 -- app/version.sh@14 -- # tr -d '"' 00:07:10.969 16:18:19 -- app/version.sh@14 -- # cut -f2 00:07:10.969 16:18:19 -- app/version.sh@20 -- # suffix=-pre 00:07:10.969 16:18:19 -- app/version.sh@22 -- # version=24.5 00:07:10.969 16:18:19 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:10.969 16:18:19 -- app/version.sh@28 -- # version=24.5rc0 00:07:10.969 16:18:19 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:10.969 16:18:19 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:10.969 16:18:19 -- app/version.sh@30 -- # py_version=24.5rc0 00:07:10.969 16:18:19 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:07:10.969 00:07:10.969 real 0m0.194s 00:07:10.969 user 0m0.092s 00:07:10.969 sys 0m0.146s 00:07:10.969 16:18:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:10.969 16:18:19 -- common/autotest_common.sh@10 -- # set +x 00:07:10.969 ************************************ 00:07:10.969 END TEST version 00:07:10.969 ************************************ 00:07:10.969 16:18:19 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:07:10.969 16:18:19 -- spdk/autotest.sh@194 -- # uname -s 00:07:10.969 16:18:19 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:10.969 16:18:19 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:10.969 16:18:19 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:10.969 16:18:19 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:10.969 16:18:19 -- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:07:10.969 16:18:19 -- spdk/autotest.sh@258 -- # 
timing_exit lib 00:07:10.969 16:18:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:10.969 16:18:19 -- common/autotest_common.sh@10 -- # set +x 00:07:10.969 16:18:19 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:07:10.969 16:18:19 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:07:10.969 16:18:19 -- spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:07:10.969 16:18:19 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:07:10.969 16:18:19 -- spdk/autotest.sh@281 -- # '[' rdma = rdma ']' 00:07:10.969 16:18:19 -- spdk/autotest.sh@282 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:10.969 16:18:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:10.969 16:18:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:10.969 16:18:19 -- common/autotest_common.sh@10 -- # set +x 00:07:11.229 ************************************ 00:07:11.229 START TEST nvmf_rdma 00:07:11.229 ************************************ 00:07:11.229 16:18:20 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:11.229 * Looking for test storage... 00:07:11.229 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:07:11.229 16:18:20 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:11.229 16:18:20 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:11.229 16:18:20 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:11.229 16:18:20 -- nvmf/common.sh@7 -- # uname -s 00:07:11.229 16:18:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:11.229 16:18:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:11.229 16:18:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:11.229 16:18:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:11.229 16:18:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:11.229 16:18:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:11.229 16:18:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:11.229 16:18:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:11.229 16:18:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:11.229 16:18:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:11.229 16:18:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:07:11.229 16:18:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:07:11.229 16:18:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:11.229 16:18:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:11.229 16:18:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:11.229 16:18:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:11.229 16:18:20 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:11.229 16:18:20 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:11.229 16:18:20 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:11.229 16:18:20 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:11.229 16:18:20 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.229 16:18:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.229 16:18:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.229 16:18:20 -- paths/export.sh@5 -- # export PATH 00:07:11.229 16:18:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.229 16:18:20 -- nvmf/common.sh@47 -- # : 0 00:07:11.229 16:18:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:11.229 16:18:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:11.229 16:18:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:11.229 16:18:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:11.229 16:18:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:11.229 16:18:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:11.229 16:18:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:11.229 16:18:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:11.229 16:18:20 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:11.229 16:18:20 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:11.229 16:18:20 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:11.229 16:18:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:11.229 16:18:20 -- common/autotest_common.sh@10 -- # set +x 00:07:11.229 16:18:20 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:11.229 16:18:20 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:07:11.229 16:18:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:11.229 16:18:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:11.229 16:18:20 -- common/autotest_common.sh@10 -- # set +x 00:07:11.492 ************************************ 00:07:11.492 START TEST nvmf_example 00:07:11.492 ************************************ 00:07:11.492 16:18:20 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:07:11.492 * Looking for test storage... 
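Before nvmf_example can run, nvmftestinit (traced below) has to find physical mlx5 ports: it walks the Mellanox PCI IDs (vendor 0x15b3, here two 0x1013 devices at 0000:18:00.0 and 0000:18:00.1), maps them to their netdevs (mlx_0_0 and mlx_0_1) and loads the RDMA kernel modules. A hand-run version of that probe for this particular box, as a sketch:

  lspci -d 15b3:                                  # lists the two Mellanox ports found in the trace
  ls /sys/bus/pci/devices/0000:18:00.0/net/       # -> mlx_0_0 on this machine
  ls /sys/bus/pci/devices/0000:18:00.1/net/       # -> mlx_0_1
  sudo modprobe -a ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm   # same modules common.sh loads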
00:07:11.492 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:11.492 16:18:20 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:11.492 16:18:20 -- nvmf/common.sh@7 -- # uname -s 00:07:11.492 16:18:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:11.492 16:18:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:11.492 16:18:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:11.492 16:18:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:11.492 16:18:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:11.492 16:18:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:11.492 16:18:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:11.492 16:18:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:11.492 16:18:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:11.492 16:18:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:11.752 16:18:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:07:11.752 16:18:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:07:11.752 16:18:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:11.752 16:18:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:11.752 16:18:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:11.752 16:18:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:11.752 16:18:20 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:11.752 16:18:20 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:11.752 16:18:20 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:11.752 16:18:20 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:11.752 16:18:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.752 16:18:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.752 16:18:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.752 16:18:20 -- paths/export.sh@5 -- # export PATH 00:07:11.752 16:18:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.752 16:18:20 -- nvmf/common.sh@47 -- # : 0 00:07:11.752 16:18:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:11.752 16:18:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:11.752 16:18:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:11.752 16:18:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:11.752 16:18:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:11.752 16:18:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:11.752 16:18:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:11.752 16:18:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:11.752 16:18:20 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:11.752 16:18:20 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:11.752 16:18:20 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:11.752 16:18:20 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:11.752 16:18:20 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:11.752 16:18:20 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:11.752 16:18:20 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:11.752 16:18:20 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:11.752 16:18:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:11.752 16:18:20 -- common/autotest_common.sh@10 -- # set +x 00:07:11.752 16:18:20 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:11.752 16:18:20 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:07:11.752 16:18:20 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:11.752 16:18:20 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:11.753 16:18:20 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:11.753 16:18:20 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:11.753 16:18:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:11.753 16:18:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:11.753 16:18:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:11.753 16:18:20 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:11.753 16:18:20 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:11.753 16:18:20 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:11.753 16:18:20 -- 
common/autotest_common.sh@10 -- # set +x 00:07:18.323 16:18:26 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:18.323 16:18:26 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:18.323 16:18:26 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:18.323 16:18:26 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:18.323 16:18:26 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:18.323 16:18:26 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:18.323 16:18:26 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:18.323 16:18:26 -- nvmf/common.sh@295 -- # net_devs=() 00:07:18.323 16:18:26 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:18.323 16:18:26 -- nvmf/common.sh@296 -- # e810=() 00:07:18.323 16:18:26 -- nvmf/common.sh@296 -- # local -ga e810 00:07:18.323 16:18:26 -- nvmf/common.sh@297 -- # x722=() 00:07:18.323 16:18:26 -- nvmf/common.sh@297 -- # local -ga x722 00:07:18.323 16:18:26 -- nvmf/common.sh@298 -- # mlx=() 00:07:18.323 16:18:26 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:18.323 16:18:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:18.323 16:18:26 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:18.323 16:18:26 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:18.323 16:18:26 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:18.323 16:18:26 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:18.323 16:18:26 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:18.323 16:18:26 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:18.323 16:18:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:18.323 16:18:26 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:18.323 16:18:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:18.323 16:18:26 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:18.323 16:18:26 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:18.323 16:18:26 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:18.323 16:18:26 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:18.323 16:18:26 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:18.323 16:18:26 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:18.323 16:18:26 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:18.323 16:18:26 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:18.323 16:18:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:18.323 16:18:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:07:18.323 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:07:18.323 16:18:26 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:18.323 16:18:26 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:18.323 16:18:26 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:07:18.323 16:18:26 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:07:18.323 16:18:26 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:18.323 16:18:26 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:18.323 16:18:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:18.323 16:18:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:07:18.323 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:07:18.323 16:18:26 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:18.323 16:18:26 -- nvmf/common.sh@346 -- # [[ 
mlx5_core == unbound ]] 00:07:18.323 16:18:26 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:07:18.323 16:18:26 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:07:18.323 16:18:26 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:18.323 16:18:26 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:18.323 16:18:26 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:18.323 16:18:26 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:18.323 16:18:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:18.323 16:18:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:18.323 16:18:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:18.323 16:18:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:18.323 16:18:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:07:18.323 Found net devices under 0000:18:00.0: mlx_0_0 00:07:18.323 16:18:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:18.323 16:18:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:18.323 16:18:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:18.323 16:18:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:18.323 16:18:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:18.323 16:18:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:07:18.323 Found net devices under 0000:18:00.1: mlx_0_1 00:07:18.323 16:18:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:18.323 16:18:26 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:18.323 16:18:26 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:18.323 16:18:26 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:18.323 16:18:26 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:07:18.323 16:18:26 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:07:18.323 16:18:26 -- nvmf/common.sh@409 -- # rdma_device_init 00:07:18.323 16:18:26 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:07:18.323 16:18:26 -- nvmf/common.sh@58 -- # uname 00:07:18.323 16:18:26 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:18.323 16:18:26 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:18.323 16:18:26 -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:18.323 16:18:26 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:18.323 16:18:26 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:18.323 16:18:26 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:18.323 16:18:26 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:18.323 16:18:26 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:18.323 16:18:26 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:07:18.323 16:18:26 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:18.323 16:18:26 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:18.323 16:18:26 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:18.323 16:18:26 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:18.323 16:18:26 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:18.323 16:18:26 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:18.323 16:18:26 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:18.323 16:18:26 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:18.323 16:18:26 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:18.323 16:18:26 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:18.323 16:18:26 -- nvmf/common.sh@104 
-- # echo mlx_0_0 00:07:18.323 16:18:26 -- nvmf/common.sh@105 -- # continue 2 00:07:18.323 16:18:26 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:18.323 16:18:26 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:18.323 16:18:26 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:18.323 16:18:26 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:18.323 16:18:26 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:18.323 16:18:26 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:18.323 16:18:26 -- nvmf/common.sh@105 -- # continue 2 00:07:18.323 16:18:26 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:18.323 16:18:26 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:18.323 16:18:26 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:18.323 16:18:26 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:18.323 16:18:26 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:18.323 16:18:26 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:18.323 16:18:26 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:18.323 16:18:26 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:18.323 16:18:26 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:07:18.323 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:18.323 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:07:18.323 altname enp24s0f0np0 00:07:18.323 altname ens785f0np0 00:07:18.323 inet 192.168.100.8/24 scope global mlx_0_0 00:07:18.323 valid_lft forever preferred_lft forever 00:07:18.323 16:18:26 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:18.323 16:18:26 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:07:18.323 16:18:26 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:18.323 16:18:26 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:18.323 16:18:26 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:18.323 16:18:26 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:18.323 16:18:26 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:18.323 16:18:26 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:18.323 16:18:26 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:07:18.323 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:18.323 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:07:18.323 altname enp24s0f1np1 00:07:18.323 altname ens785f1np1 00:07:18.323 inet 192.168.100.9/24 scope global mlx_0_1 00:07:18.323 valid_lft forever preferred_lft forever 00:07:18.323 16:18:26 -- nvmf/common.sh@411 -- # return 0 00:07:18.323 16:18:26 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:18.323 16:18:26 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:18.323 16:18:26 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:07:18.323 16:18:26 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:07:18.323 16:18:26 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:18.323 16:18:26 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:18.323 16:18:26 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:18.323 16:18:26 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:18.323 16:18:26 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:18.323 16:18:26 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:18.323 16:18:26 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:18.323 16:18:26 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:18.323 16:18:26 -- nvmf/common.sh@103 
-- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:18.323 16:18:26 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:18.323 16:18:26 -- nvmf/common.sh@105 -- # continue 2 00:07:18.323 16:18:26 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:18.323 16:18:26 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:18.323 16:18:26 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:18.323 16:18:26 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:18.323 16:18:26 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:18.323 16:18:26 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:18.323 16:18:26 -- nvmf/common.sh@105 -- # continue 2 00:07:18.323 16:18:26 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:18.323 16:18:26 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:18.323 16:18:26 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:18.323 16:18:26 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:18.323 16:18:26 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:18.323 16:18:26 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:18.323 16:18:26 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:18.323 16:18:26 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:07:18.323 16:18:26 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:18.323 16:18:26 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:18.323 16:18:26 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:18.323 16:18:26 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:18.323 16:18:26 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:07:18.323 192.168.100.9' 00:07:18.323 16:18:26 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:07:18.323 192.168.100.9' 00:07:18.323 16:18:26 -- nvmf/common.sh@446 -- # head -n 1 00:07:18.323 16:18:26 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:18.323 16:18:26 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:07:18.323 192.168.100.9' 00:07:18.323 16:18:26 -- nvmf/common.sh@447 -- # tail -n +2 00:07:18.323 16:18:26 -- nvmf/common.sh@447 -- # head -n 1 00:07:18.323 16:18:26 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:18.323 16:18:26 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:07:18.323 16:18:26 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:18.323 16:18:26 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:07:18.323 16:18:26 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:07:18.323 16:18:26 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:07:18.323 16:18:26 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:18.323 16:18:26 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:18.323 16:18:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:18.323 16:18:26 -- common/autotest_common.sh@10 -- # set +x 00:07:18.323 16:18:26 -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:07:18.323 16:18:26 -- target/nvmf_example.sh@34 -- # nvmfpid=352321 00:07:18.323 16:18:26 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:18.323 16:18:26 -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:18.323 16:18:26 -- target/nvmf_example.sh@36 -- # waitforlisten 352321 00:07:18.323 16:18:26 -- common/autotest_common.sh@817 -- # '[' -z 352321 ']' 00:07:18.323 16:18:26 -- common/autotest_common.sh@821 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:07:18.323 16:18:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:18.323 16:18:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.323 16:18:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:18.324 16:18:26 -- common/autotest_common.sh@10 -- # set +x 00:07:18.324 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.583 16:18:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:18.583 16:18:27 -- common/autotest_common.sh@850 -- # return 0 00:07:18.583 16:18:27 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:18.583 16:18:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:18.583 16:18:27 -- common/autotest_common.sh@10 -- # set +x 00:07:18.583 16:18:27 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:18.583 16:18:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:18.583 16:18:27 -- common/autotest_common.sh@10 -- # set +x 00:07:18.843 16:18:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:18.843 16:18:27 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:18.843 16:18:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:18.843 16:18:27 -- common/autotest_common.sh@10 -- # set +x 00:07:18.843 16:18:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:18.843 16:18:27 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:18.843 16:18:27 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:18.843 16:18:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:18.843 16:18:27 -- common/autotest_common.sh@10 -- # set +x 00:07:18.843 16:18:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:18.843 16:18:27 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:18.843 16:18:27 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:18.843 16:18:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:18.843 16:18:27 -- common/autotest_common.sh@10 -- # set +x 00:07:18.843 16:18:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:18.843 16:18:27 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:18.843 16:18:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:18.843 16:18:27 -- common/autotest_common.sh@10 -- # set +x 00:07:18.843 16:18:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:18.843 16:18:27 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:18.843 16:18:27 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:18.843 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.083 Initializing NVMe Controllers 00:07:31.083 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:07:31.084 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:31.084 Initialization complete. 
Launching workers. 00:07:31.084 ======================================================== 00:07:31.084 Latency(us) 00:07:31.084 Device Information : IOPS MiB/s Average min max 00:07:31.084 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 25872.10 101.06 2473.48 622.36 14105.80 00:07:31.084 ======================================================== 00:07:31.084 Total : 25872.10 101.06 2473.48 622.36 14105.80 00:07:31.084 00:07:31.084 16:18:38 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:31.084 16:18:38 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:31.084 16:18:38 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:31.084 16:18:38 -- nvmf/common.sh@117 -- # sync 00:07:31.084 16:18:38 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:07:31.084 16:18:38 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:07:31.084 16:18:38 -- nvmf/common.sh@120 -- # set +e 00:07:31.084 16:18:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:31.084 16:18:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:07:31.084 rmmod nvme_rdma 00:07:31.084 rmmod nvme_fabrics 00:07:31.084 16:18:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:31.084 16:18:38 -- nvmf/common.sh@124 -- # set -e 00:07:31.084 16:18:38 -- nvmf/common.sh@125 -- # return 0 00:07:31.084 16:18:38 -- nvmf/common.sh@478 -- # '[' -n 352321 ']' 00:07:31.084 16:18:38 -- nvmf/common.sh@479 -- # killprocess 352321 00:07:31.084 16:18:38 -- common/autotest_common.sh@936 -- # '[' -z 352321 ']' 00:07:31.084 16:18:38 -- common/autotest_common.sh@940 -- # kill -0 352321 00:07:31.084 16:18:38 -- common/autotest_common.sh@941 -- # uname 00:07:31.084 16:18:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:31.084 16:18:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 352321 00:07:31.084 16:18:39 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:07:31.084 16:18:39 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:07:31.084 16:18:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 352321' 00:07:31.084 killing process with pid 352321 00:07:31.084 16:18:39 -- common/autotest_common.sh@955 -- # kill 352321 00:07:31.084 16:18:39 -- common/autotest_common.sh@960 -- # wait 352321 00:07:31.084 [2024-04-26 16:18:39.085551] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:07:31.084 nvmf threads initialize successfully 00:07:31.084 bdev subsystem init successfully 00:07:31.084 created a nvmf target service 00:07:31.084 create targets's poll groups done 00:07:31.084 all subsystems of target started 00:07:31.084 nvmf target is running 00:07:31.084 all subsystems of target stopped 00:07:31.084 destroy targets's poll groups done 00:07:31.084 destroyed the nvmf target service 00:07:31.084 bdev subsystem finish successfully 00:07:31.084 nvmf threads destroy successfully 00:07:31.084 16:18:39 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:31.084 16:18:39 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:07:31.084 16:18:39 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:31.084 16:18:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:31.084 16:18:39 -- common/autotest_common.sh@10 -- # set +x 00:07:31.084 00:07:31.084 real 0m18.923s 00:07:31.084 user 0m52.065s 00:07:31.084 sys 0m4.959s 00:07:31.084 16:18:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:31.084 16:18:39 -- common/autotest_common.sh@10 -- # set +x 00:07:31.084 
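The nvmf_example run traced above reduces to a short RPC sequence against the example target, a timed perf run, and kernel-module cleanup. What follows is a minimal standalone sketch of those steps, not the test script itself: it assumes an SPDK checkout at SPDK_ROOT (a placeholder) with a target application already started and listening on the default /var/tmp/spdk.sock. The concrete values (the RDMA transport options, the 64 MiB / 512-byte malloc bdev, cnode1, 192.168.100.8:4420 and the perf flags) are taken directly from the trace; in the test itself the same RPC methods are issued through rpc_cmd and the module removal is done by nvmftestfini.

#!/usr/bin/env bash
# Minimal sketch of the nvmf_example flow traced above (not the test script).
set -euo pipefail

SPDK_ROOT=${SPDK_ROOT:-/path/to/spdk}      # placeholder: root of an SPDK build tree
rpc="$SPDK_ROOT/scripts/rpc.py"            # assumes the target listens on /var/tmp/spdk.sock

# 1. RDMA transport with the shared-buffer settings used in the trace
"$rpc" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

# 2. 64 MiB malloc bdev with 512-byte blocks; the trace records its name as Malloc0
"$rpc" bdev_malloc_create 64 512

# 3. Subsystem, namespace and RDMA listener on 192.168.100.8:4420
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

# 4. 10-second random read/write perf run at queue depth 64 with 4 KiB I/O
"$SPDK_ROOT/build/bin/spdk_nvme_perf" -q 64 -o 4096 -w randrw -M 30 -t 10 \
  -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

# 5. Cleanup, mirroring nvmftestfini: unload the kernel initiator modules
sudo modprobe -v -r nvme-rdma
sudo modprobe -v -r nvme-fabrics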
************************************ 00:07:31.084 END TEST nvmf_example 00:07:31.084 ************************************ 00:07:31.084 16:18:39 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:07:31.084 16:18:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:31.084 16:18:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:31.084 16:18:39 -- common/autotest_common.sh@10 -- # set +x 00:07:31.084 ************************************ 00:07:31.084 START TEST nvmf_filesystem 00:07:31.084 ************************************ 00:07:31.084 16:18:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:07:31.084 * Looking for test storage... 00:07:31.084 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:31.084 16:18:39 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:07:31.084 16:18:39 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:31.084 16:18:39 -- common/autotest_common.sh@34 -- # set -e 00:07:31.084 16:18:39 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:31.084 16:18:39 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:31.084 16:18:39 -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:07:31.084 16:18:39 -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:31.084 16:18:39 -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:07:31.084 16:18:39 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:31.084 16:18:39 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:31.084 16:18:39 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:31.084 16:18:39 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:31.084 16:18:39 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:31.084 16:18:39 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:31.084 16:18:39 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:31.084 16:18:39 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:31.084 16:18:39 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:31.084 16:18:39 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:31.084 16:18:39 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:31.084 16:18:39 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:31.084 16:18:39 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:31.084 16:18:39 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:31.084 16:18:39 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:31.084 16:18:39 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:31.084 16:18:39 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:31.084 16:18:39 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:31.084 16:18:39 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:07:31.084 16:18:39 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:31.084 16:18:39 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:31.084 16:18:39 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:31.084 16:18:39 -- common/build_config.sh@23 -- # 
CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:31.084 16:18:39 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:31.084 16:18:39 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:31.084 16:18:39 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:31.084 16:18:39 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:31.084 16:18:39 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:31.084 16:18:39 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:31.084 16:18:39 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:31.084 16:18:39 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:31.084 16:18:39 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:31.084 16:18:39 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:31.084 16:18:39 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:31.084 16:18:39 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:31.084 16:18:39 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:07:31.084 16:18:39 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:31.084 16:18:39 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:31.084 16:18:39 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:31.084 16:18:39 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:31.084 16:18:39 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:31.084 16:18:39 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:31.084 16:18:39 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:31.084 16:18:39 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:31.084 16:18:39 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:31.084 16:18:39 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:31.084 16:18:39 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:31.084 16:18:39 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:31.084 16:18:39 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:31.084 16:18:39 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:31.084 16:18:39 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:07:31.084 16:18:39 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:31.084 16:18:39 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:07:31.084 16:18:39 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:07:31.084 16:18:39 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:07:31.084 16:18:39 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:07:31.084 16:18:39 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:07:31.084 16:18:39 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:07:31.084 16:18:39 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:07:31.084 16:18:39 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:07:31.084 16:18:39 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:07:31.084 16:18:39 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:07:31.084 16:18:39 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:07:31.084 16:18:39 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:07:31.084 16:18:39 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:07:31.084 16:18:39 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:07:31.084 16:18:39 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:07:31.084 16:18:39 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:31.084 16:18:39 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:07:31.084 16:18:39 -- 
common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:07:31.084 16:18:39 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:07:31.084 16:18:39 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:07:31.084 16:18:39 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:07:31.084 16:18:39 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:07:31.085 16:18:39 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:07:31.085 16:18:39 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:07:31.085 16:18:39 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:07:31.085 16:18:39 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:07:31.085 16:18:39 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:07:31.085 16:18:39 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:31.085 16:18:39 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:07:31.085 16:18:39 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:07:31.085 16:18:39 -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:07:31.085 16:18:39 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:07:31.085 16:18:39 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:07:31.085 16:18:39 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:07:31.085 16:18:39 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:07:31.085 16:18:39 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:07:31.085 16:18:39 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:31.085 16:18:39 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:07:31.085 16:18:39 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:31.085 16:18:39 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:31.085 16:18:39 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:31.085 16:18:39 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:31.085 16:18:39 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:31.085 16:18:39 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:31.085 16:18:39 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:07:31.085 16:18:39 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:31.085 #define SPDK_CONFIG_H 00:07:31.085 #define SPDK_CONFIG_APPS 1 00:07:31.085 #define SPDK_CONFIG_ARCH native 00:07:31.085 #undef SPDK_CONFIG_ASAN 00:07:31.085 #undef SPDK_CONFIG_AVAHI 00:07:31.085 #undef SPDK_CONFIG_CET 00:07:31.085 #define SPDK_CONFIG_COVERAGE 1 00:07:31.085 #define SPDK_CONFIG_CROSS_PREFIX 00:07:31.085 #undef SPDK_CONFIG_CRYPTO 00:07:31.085 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:31.085 #undef SPDK_CONFIG_CUSTOMOCF 00:07:31.085 #undef SPDK_CONFIG_DAOS 00:07:31.085 #define SPDK_CONFIG_DAOS_DIR 00:07:31.085 #define SPDK_CONFIG_DEBUG 1 00:07:31.085 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:31.085 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:07:31.085 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:31.085 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:31.085 #undef 
SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:31.085 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:07:31.085 #define SPDK_CONFIG_EXAMPLES 1 00:07:31.085 #undef SPDK_CONFIG_FC 00:07:31.085 #define SPDK_CONFIG_FC_PATH 00:07:31.085 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:31.085 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:31.085 #undef SPDK_CONFIG_FUSE 00:07:31.085 #undef SPDK_CONFIG_FUZZER 00:07:31.085 #define SPDK_CONFIG_FUZZER_LIB 00:07:31.085 #undef SPDK_CONFIG_GOLANG 00:07:31.085 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:31.085 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:31.085 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:31.085 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:07:31.085 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:31.085 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:31.085 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:31.085 #define SPDK_CONFIG_IDXD 1 00:07:31.085 #undef SPDK_CONFIG_IDXD_KERNEL 00:07:31.085 #undef SPDK_CONFIG_IPSEC_MB 00:07:31.085 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:31.085 #define SPDK_CONFIG_ISAL 1 00:07:31.085 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:31.085 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:31.085 #define SPDK_CONFIG_LIBDIR 00:07:31.085 #undef SPDK_CONFIG_LTO 00:07:31.085 #define SPDK_CONFIG_MAX_LCORES 00:07:31.085 #define SPDK_CONFIG_NVME_CUSE 1 00:07:31.085 #undef SPDK_CONFIG_OCF 00:07:31.085 #define SPDK_CONFIG_OCF_PATH 00:07:31.085 #define SPDK_CONFIG_OPENSSL_PATH 00:07:31.085 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:31.085 #define SPDK_CONFIG_PGO_DIR 00:07:31.085 #undef SPDK_CONFIG_PGO_USE 00:07:31.085 #define SPDK_CONFIG_PREFIX /usr/local 00:07:31.085 #undef SPDK_CONFIG_RAID5F 00:07:31.085 #undef SPDK_CONFIG_RBD 00:07:31.085 #define SPDK_CONFIG_RDMA 1 00:07:31.085 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:31.085 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:31.085 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:31.085 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:31.085 #define SPDK_CONFIG_SHARED 1 00:07:31.085 #undef SPDK_CONFIG_SMA 00:07:31.085 #define SPDK_CONFIG_TESTS 1 00:07:31.085 #undef SPDK_CONFIG_TSAN 00:07:31.085 #define SPDK_CONFIG_UBLK 1 00:07:31.085 #define SPDK_CONFIG_UBSAN 1 00:07:31.085 #undef SPDK_CONFIG_UNIT_TESTS 00:07:31.085 #undef SPDK_CONFIG_URING 00:07:31.085 #define SPDK_CONFIG_URING_PATH 00:07:31.085 #undef SPDK_CONFIG_URING_ZNS 00:07:31.085 #undef SPDK_CONFIG_USDT 00:07:31.085 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:31.085 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:31.085 #undef SPDK_CONFIG_VFIO_USER 00:07:31.085 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:31.085 #define SPDK_CONFIG_VHOST 1 00:07:31.085 #define SPDK_CONFIG_VIRTIO 1 00:07:31.085 #undef SPDK_CONFIG_VTUNE 00:07:31.085 #define SPDK_CONFIG_VTUNE_DIR 00:07:31.085 #define SPDK_CONFIG_WERROR 1 00:07:31.085 #define SPDK_CONFIG_WPDK_DIR 00:07:31.085 #undef SPDK_CONFIG_XNVME 00:07:31.085 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:31.085 16:18:39 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:31.085 16:18:39 -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:31.085 16:18:39 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:31.085 16:18:39 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:31.085 16:18:39 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:31.085 16:18:39 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.085 16:18:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.085 16:18:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.085 16:18:39 -- paths/export.sh@5 -- # export PATH 00:07:31.085 16:18:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.085 16:18:39 -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:07:31.085 16:18:39 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:07:31.085 16:18:39 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:07:31.085 16:18:39 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:07:31.085 16:18:39 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:31.085 16:18:39 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:07:31.085 16:18:39 -- pm/common@67 -- # TEST_TAG=N/A 00:07:31.085 16:18:39 -- pm/common@68 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:07:31.085 16:18:39 -- pm/common@70 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:07:31.085 16:18:39 -- pm/common@71 -- # uname -s 00:07:31.085 16:18:39 -- pm/common@71 -- # PM_OS=Linux 00:07:31.085 16:18:39 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:31.085 16:18:39 -- pm/common@74 -- # 
[[ Linux == FreeBSD ]] 00:07:31.085 16:18:39 -- pm/common@76 -- # [[ Linux == Linux ]] 00:07:31.085 16:18:39 -- pm/common@76 -- # [[ ............................... != QEMU ]] 00:07:31.085 16:18:39 -- pm/common@76 -- # [[ ! -e /.dockerenv ]] 00:07:31.085 16:18:39 -- pm/common@79 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:31.085 16:18:39 -- pm/common@80 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:31.085 16:18:39 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:07:31.085 16:18:39 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:07:31.085 16:18:39 -- pm/common@85 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:07:31.085 16:18:39 -- common/autotest_common.sh@57 -- # : 0 00:07:31.085 16:18:39 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:07:31.085 16:18:39 -- common/autotest_common.sh@61 -- # : 0 00:07:31.085 16:18:39 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:31.085 16:18:39 -- common/autotest_common.sh@63 -- # : 0 00:07:31.085 16:18:39 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:07:31.085 16:18:39 -- common/autotest_common.sh@65 -- # : 1 00:07:31.085 16:18:39 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:31.085 16:18:39 -- common/autotest_common.sh@67 -- # : 0 00:07:31.085 16:18:39 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:07:31.085 16:18:39 -- common/autotest_common.sh@69 -- # : 00:07:31.085 16:18:39 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:07:31.086 16:18:39 -- common/autotest_common.sh@71 -- # : 0 00:07:31.086 16:18:39 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:07:31.086 16:18:39 -- common/autotest_common.sh@73 -- # : 0 00:07:31.086 16:18:39 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:07:31.086 16:18:39 -- common/autotest_common.sh@75 -- # : 0 00:07:31.086 16:18:39 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:07:31.086 16:18:39 -- common/autotest_common.sh@77 -- # : 0 00:07:31.086 16:18:39 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:31.086 16:18:39 -- common/autotest_common.sh@79 -- # : 0 00:07:31.086 16:18:39 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:07:31.086 16:18:39 -- common/autotest_common.sh@81 -- # : 0 00:07:31.086 16:18:39 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:07:31.086 16:18:39 -- common/autotest_common.sh@83 -- # : 0 00:07:31.086 16:18:39 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:07:31.086 16:18:39 -- common/autotest_common.sh@85 -- # : 1 00:07:31.086 16:18:39 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:07:31.086 16:18:39 -- common/autotest_common.sh@87 -- # : 0 00:07:31.086 16:18:39 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:07:31.086 16:18:39 -- common/autotest_common.sh@89 -- # : 0 00:07:31.086 16:18:39 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:07:31.086 16:18:39 -- common/autotest_common.sh@91 -- # : 1 00:07:31.086 16:18:39 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:07:31.086 16:18:39 -- common/autotest_common.sh@93 -- # : 0 00:07:31.086 16:18:39 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:07:31.086 16:18:39 -- common/autotest_common.sh@95 -- # : 0 00:07:31.086 16:18:39 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:31.086 16:18:39 -- common/autotest_common.sh@97 -- # : 0 00:07:31.086 
16:18:39 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:07:31.086 16:18:39 -- common/autotest_common.sh@99 -- # : 0 00:07:31.086 16:18:39 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:07:31.086 16:18:39 -- common/autotest_common.sh@101 -- # : rdma 00:07:31.086 16:18:39 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:31.086 16:18:39 -- common/autotest_common.sh@103 -- # : 0 00:07:31.086 16:18:39 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:07:31.086 16:18:39 -- common/autotest_common.sh@105 -- # : 0 00:07:31.086 16:18:39 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:07:31.086 16:18:39 -- common/autotest_common.sh@107 -- # : 0 00:07:31.086 16:18:39 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:07:31.086 16:18:39 -- common/autotest_common.sh@109 -- # : 0 00:07:31.086 16:18:39 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:07:31.086 16:18:39 -- common/autotest_common.sh@111 -- # : 0 00:07:31.086 16:18:39 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:07:31.086 16:18:39 -- common/autotest_common.sh@113 -- # : 0 00:07:31.086 16:18:39 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:07:31.086 16:18:39 -- common/autotest_common.sh@115 -- # : 0 00:07:31.086 16:18:39 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:07:31.086 16:18:39 -- common/autotest_common.sh@117 -- # : 0 00:07:31.086 16:18:39 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:31.086 16:18:39 -- common/autotest_common.sh@119 -- # : 0 00:07:31.086 16:18:39 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:07:31.086 16:18:39 -- common/autotest_common.sh@121 -- # : 1 00:07:31.086 16:18:39 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:07:31.086 16:18:39 -- common/autotest_common.sh@123 -- # : 00:07:31.086 16:18:39 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:31.086 16:18:39 -- common/autotest_common.sh@125 -- # : 0 00:07:31.086 16:18:39 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:07:31.086 16:18:39 -- common/autotest_common.sh@127 -- # : 0 00:07:31.086 16:18:39 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:07:31.086 16:18:39 -- common/autotest_common.sh@129 -- # : 0 00:07:31.086 16:18:39 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:07:31.086 16:18:39 -- common/autotest_common.sh@131 -- # : 0 00:07:31.086 16:18:39 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:07:31.086 16:18:39 -- common/autotest_common.sh@133 -- # : 0 00:07:31.086 16:18:39 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:07:31.086 16:18:39 -- common/autotest_common.sh@135 -- # : 0 00:07:31.086 16:18:39 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:07:31.086 16:18:39 -- common/autotest_common.sh@137 -- # : 00:07:31.086 16:18:39 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:07:31.086 16:18:39 -- common/autotest_common.sh@139 -- # : true 00:07:31.086 16:18:39 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:07:31.086 16:18:39 -- common/autotest_common.sh@141 -- # : 0 00:07:31.086 16:18:39 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:07:31.086 16:18:39 -- common/autotest_common.sh@143 -- # : 0 00:07:31.086 16:18:39 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:07:31.086 16:18:39 -- common/autotest_common.sh@145 -- # : 0 
00:07:31.086 16:18:39 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:07:31.086 16:18:39 -- common/autotest_common.sh@147 -- # : 0 00:07:31.086 16:18:39 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:07:31.086 16:18:39 -- common/autotest_common.sh@149 -- # : 0 00:07:31.086 16:18:39 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:07:31.086 16:18:39 -- common/autotest_common.sh@151 -- # : 0 00:07:31.086 16:18:39 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:07:31.086 16:18:39 -- common/autotest_common.sh@153 -- # : mlx5 00:07:31.086 16:18:39 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:07:31.086 16:18:39 -- common/autotest_common.sh@155 -- # : 0 00:07:31.086 16:18:39 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:07:31.086 16:18:39 -- common/autotest_common.sh@157 -- # : 0 00:07:31.086 16:18:39 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:07:31.086 16:18:39 -- common/autotest_common.sh@159 -- # : 0 00:07:31.086 16:18:39 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:07:31.086 16:18:39 -- common/autotest_common.sh@161 -- # : 0 00:07:31.086 16:18:39 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:07:31.086 16:18:39 -- common/autotest_common.sh@163 -- # : 0 00:07:31.086 16:18:39 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:07:31.086 16:18:39 -- common/autotest_common.sh@166 -- # : 00:07:31.086 16:18:39 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:07:31.086 16:18:39 -- common/autotest_common.sh@168 -- # : 0 00:07:31.086 16:18:39 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:07:31.086 16:18:39 -- common/autotest_common.sh@170 -- # : 0 00:07:31.086 16:18:39 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:31.086 16:18:39 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:07:31.086 16:18:39 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:07:31.086 16:18:39 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:07:31.086 16:18:39 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:07:31.086 16:18:39 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:31.086 16:18:39 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:31.086 16:18:39 -- common/autotest_common.sh@177 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:31.086 16:18:39 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:31.086 16:18:39 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:31.086 16:18:39 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:31.086 16:18:39 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:31.086 16:18:39 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:31.086 16:18:39 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:31.086 16:18:39 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:07:31.086 16:18:39 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:31.086 16:18:39 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:31.086 16:18:39 -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:31.086 16:18:39 -- common/autotest_common.sh@193 -- # 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:31.086 16:18:39 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:31.086 16:18:39 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:07:31.086 16:18:39 -- common/autotest_common.sh@199 -- # cat 00:07:31.086 16:18:39 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:07:31.086 16:18:39 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:31.086 16:18:39 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:31.087 16:18:39 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:31.087 16:18:39 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:31.087 16:18:39 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:07:31.087 16:18:39 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:07:31.087 16:18:39 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:07:31.087 16:18:39 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:07:31.087 16:18:39 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:07:31.087 16:18:39 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:07:31.087 16:18:39 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:31.087 16:18:39 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:31.087 16:18:39 -- common/autotest_common.sh@243 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:31.087 16:18:39 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:31.087 16:18:39 -- common/autotest_common.sh@245 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:31.087 16:18:39 -- common/autotest_common.sh@245 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:31.087 16:18:39 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:31.087 16:18:39 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:31.087 16:18:39 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:07:31.087 16:18:39 -- common/autotest_common.sh@252 -- # export valgrind= 00:07:31.087 16:18:39 -- common/autotest_common.sh@252 -- # valgrind= 00:07:31.087 16:18:39 -- common/autotest_common.sh@258 -- # uname -s 00:07:31.087 16:18:39 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:07:31.087 16:18:39 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:07:31.087 16:18:39 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:07:31.087 16:18:39 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:07:31.087 16:18:39 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:07:31.087 16:18:39 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:07:31.087 16:18:39 -- common/autotest_common.sh@268 -- # MAKE=make 00:07:31.087 16:18:39 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j72 00:07:31.087 16:18:39 -- 
common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:07:31.087 16:18:39 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:07:31.087 16:18:39 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:07:31.087 16:18:39 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:07:31.087 16:18:39 -- common/autotest_common.sh@289 -- # for i in "$@" 00:07:31.087 16:18:39 -- common/autotest_common.sh@290 -- # case "$i" in 00:07:31.087 16:18:39 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=rdma 00:07:31.087 16:18:39 -- common/autotest_common.sh@307 -- # [[ -z 354091 ]] 00:07:31.087 16:18:39 -- common/autotest_common.sh@307 -- # kill -0 354091 00:07:31.087 16:18:39 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:07:31.087 16:18:39 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:07:31.087 16:18:39 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:07:31.087 16:18:39 -- common/autotest_common.sh@320 -- # local mount target_dir 00:07:31.087 16:18:39 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:07:31.087 16:18:39 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:07:31.087 16:18:39 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:07:31.087 16:18:39 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:07:31.087 16:18:39 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.4gk53X 00:07:31.087 16:18:39 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:31.087 16:18:39 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:07:31.087 16:18:39 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:07:31.087 16:18:39 -- common/autotest_common.sh@344 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.4gk53X/tests/target /tmp/spdk.4gk53X 00:07:31.087 16:18:39 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:07:31.087 16:18:39 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:31.087 16:18:39 -- common/autotest_common.sh@316 -- # df -T 00:07:31.087 16:18:39 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:07:31.087 16:18:39 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_devtmpfs 00:07:31.087 16:18:39 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:07:31.087 16:18:39 -- common/autotest_common.sh@351 -- # avails["$mount"]=67108864 00:07:31.087 16:18:39 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:07:31.087 16:18:39 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:07:31.087 16:18:39 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:31.087 16:18:39 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_root 00:07:31.087 16:18:39 -- common/autotest_common.sh@350 -- # fss["$mount"]=overlay 00:07:31.087 16:18:39 -- common/autotest_common.sh@351 -- # avails["$mount"]=56431046656 00:07:31.087 16:18:39 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67027496960 00:07:31.087 16:18:39 -- common/autotest_common.sh@352 -- # uses["$mount"]=10596450304 00:07:31.087 16:18:39 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:31.087 16:18:39 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:31.087 16:18:39 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:31.087 16:18:39 -- common/autotest_common.sh@351 -- # 
avails["$mount"]=33509036032 00:07:31.087 16:18:39 -- common/autotest_common.sh@351 -- # sizes["$mount"]=33513746432 00:07:31.087 16:18:39 -- common/autotest_common.sh@352 -- # uses["$mount"]=4710400 00:07:31.087 16:18:39 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:31.087 16:18:39 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:31.087 16:18:39 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:31.087 16:18:39 -- common/autotest_common.sh@351 -- # avails["$mount"]=13396303872 00:07:31.087 16:18:39 -- common/autotest_common.sh@351 -- # sizes["$mount"]=13405499392 00:07:31.087 16:18:39 -- common/autotest_common.sh@352 -- # uses["$mount"]=9195520 00:07:31.087 16:18:39 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:31.087 16:18:39 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:31.087 16:18:39 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:31.087 16:18:39 -- common/autotest_common.sh@351 -- # avails["$mount"]=33513439232 00:07:31.087 16:18:39 -- common/autotest_common.sh@351 -- # sizes["$mount"]=33513750528 00:07:31.087 16:18:39 -- common/autotest_common.sh@352 -- # uses["$mount"]=311296 00:07:31.087 16:18:39 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:31.087 16:18:39 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:31.087 16:18:39 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:31.087 16:18:39 -- common/autotest_common.sh@351 -- # avails["$mount"]=6702743552 00:07:31.087 16:18:39 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6702747648 00:07:31.087 16:18:39 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:07:31.087 16:18:39 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:31.087 16:18:39 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:07:31.087 * Looking for test storage... 
00:07:31.087 16:18:39 -- common/autotest_common.sh@357 -- # local target_space new_size 00:07:31.087 16:18:39 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:07:31.087 16:18:39 -- common/autotest_common.sh@361 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:31.087 16:18:39 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:31.087 16:18:39 -- common/autotest_common.sh@361 -- # mount=/ 00:07:31.087 16:18:39 -- common/autotest_common.sh@363 -- # target_space=56431046656 00:07:31.087 16:18:39 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:07:31.087 16:18:39 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:07:31.087 16:18:39 -- common/autotest_common.sh@369 -- # [[ overlay == tmpfs ]] 00:07:31.087 16:18:39 -- common/autotest_common.sh@369 -- # [[ overlay == ramfs ]] 00:07:31.087 16:18:39 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:07:31.087 16:18:39 -- common/autotest_common.sh@370 -- # new_size=12811042816 00:07:31.087 16:18:39 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:31.087 16:18:39 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:31.087 16:18:39 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:31.087 16:18:39 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:31.087 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:31.087 16:18:39 -- common/autotest_common.sh@378 -- # return 0 00:07:31.087 16:18:39 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:07:31.087 16:18:39 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:07:31.087 16:18:39 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:31.087 16:18:39 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:31.087 16:18:39 -- common/autotest_common.sh@1673 -- # true 00:07:31.087 16:18:39 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:07:31.087 16:18:39 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:31.087 16:18:39 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:31.087 16:18:39 -- common/autotest_common.sh@27 -- # exec 00:07:31.087 16:18:39 -- common/autotest_common.sh@29 -- # exec 00:07:31.087 16:18:39 -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:31.087 16:18:39 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:31.087 16:18:39 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:31.087 16:18:39 -- common/autotest_common.sh@18 -- # set -x 00:07:31.087 16:18:39 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:31.087 16:18:39 -- nvmf/common.sh@7 -- # uname -s 00:07:31.087 16:18:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:31.087 16:18:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:31.087 16:18:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:31.087 16:18:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:31.088 16:18:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:31.088 16:18:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:31.088 16:18:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:31.088 16:18:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:31.088 16:18:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:31.088 16:18:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:31.088 16:18:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:07:31.088 16:18:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:07:31.088 16:18:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:31.088 16:18:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:31.088 16:18:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:31.088 16:18:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:31.088 16:18:39 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:31.088 16:18:39 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:31.088 16:18:39 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:31.088 16:18:39 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:31.088 16:18:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.088 16:18:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.088 16:18:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.088 16:18:39 -- paths/export.sh@5 -- # export PATH 00:07:31.088 16:18:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.088 16:18:39 -- nvmf/common.sh@47 -- # : 0 00:07:31.088 16:18:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:31.088 16:18:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:31.088 16:18:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:31.088 16:18:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:31.088 16:18:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:31.088 16:18:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:31.088 16:18:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:31.088 16:18:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:31.088 16:18:39 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:31.088 16:18:39 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:31.088 16:18:39 -- target/filesystem.sh@15 -- # nvmftestinit 00:07:31.088 16:18:39 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:07:31.088 16:18:39 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:31.088 16:18:39 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:31.088 16:18:39 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:31.088 16:18:39 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:31.088 16:18:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.088 16:18:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:31.088 16:18:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:31.088 16:18:39 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:31.088 16:18:39 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:31.088 16:18:39 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:31.088 16:18:39 -- common/autotest_common.sh@10 -- # set +x 00:07:37.655 16:18:45 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:37.655 16:18:45 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:37.655 16:18:45 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:37.655 16:18:45 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:37.655 16:18:45 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:37.655 16:18:45 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:37.655 16:18:45 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:37.655 16:18:45 -- 
nvmf/common.sh@295 -- # net_devs=() 00:07:37.655 16:18:45 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:37.655 16:18:45 -- nvmf/common.sh@296 -- # e810=() 00:07:37.655 16:18:45 -- nvmf/common.sh@296 -- # local -ga e810 00:07:37.655 16:18:45 -- nvmf/common.sh@297 -- # x722=() 00:07:37.655 16:18:45 -- nvmf/common.sh@297 -- # local -ga x722 00:07:37.655 16:18:45 -- nvmf/common.sh@298 -- # mlx=() 00:07:37.655 16:18:45 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:37.655 16:18:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:37.655 16:18:45 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:37.655 16:18:45 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:37.655 16:18:45 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:37.655 16:18:45 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:37.655 16:18:45 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:37.655 16:18:45 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:37.655 16:18:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:37.655 16:18:45 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:37.655 16:18:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:37.655 16:18:45 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:37.655 16:18:45 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:37.655 16:18:45 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:37.655 16:18:45 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:37.655 16:18:45 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:37.655 16:18:45 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:37.655 16:18:45 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:37.655 16:18:45 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:37.655 16:18:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:37.655 16:18:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:07:37.655 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:07:37.655 16:18:45 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:37.655 16:18:45 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:37.655 16:18:45 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:07:37.655 16:18:45 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:07:37.655 16:18:45 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:37.655 16:18:45 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:37.655 16:18:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:37.655 16:18:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:07:37.655 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:07:37.655 16:18:45 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:37.655 16:18:45 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:37.655 16:18:45 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:07:37.655 16:18:45 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:07:37.655 16:18:45 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:37.655 16:18:45 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:37.655 16:18:45 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:37.655 16:18:45 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:37.655 16:18:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:37.655 
16:18:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.655 16:18:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:37.655 16:18:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.655 16:18:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:07:37.655 Found net devices under 0000:18:00.0: mlx_0_0 00:07:37.655 16:18:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.655 16:18:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:37.655 16:18:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.655 16:18:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:37.655 16:18:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.655 16:18:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:07:37.655 Found net devices under 0000:18:00.1: mlx_0_1 00:07:37.655 16:18:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.655 16:18:45 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:37.655 16:18:45 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:37.655 16:18:45 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:37.655 16:18:45 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:07:37.655 16:18:45 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:07:37.655 16:18:45 -- nvmf/common.sh@409 -- # rdma_device_init 00:07:37.655 16:18:45 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:07:37.655 16:18:45 -- nvmf/common.sh@58 -- # uname 00:07:37.655 16:18:45 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:37.655 16:18:45 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:37.655 16:18:45 -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:37.655 16:18:45 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:37.655 16:18:45 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:37.655 16:18:45 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:37.655 16:18:45 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:37.655 16:18:45 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:37.655 16:18:45 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:07:37.655 16:18:45 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:37.655 16:18:45 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:37.655 16:18:45 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:37.655 16:18:45 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:37.655 16:18:45 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:37.655 16:18:45 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:37.655 16:18:45 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:37.655 16:18:45 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:37.655 16:18:45 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:37.655 16:18:45 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:37.655 16:18:45 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:37.655 16:18:45 -- nvmf/common.sh@105 -- # continue 2 00:07:37.655 16:18:45 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:37.655 16:18:45 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:37.655 16:18:45 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:37.655 16:18:45 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:37.655 16:18:45 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:37.655 16:18:45 -- nvmf/common.sh@104 -- # 
echo mlx_0_1 00:07:37.655 16:18:45 -- nvmf/common.sh@105 -- # continue 2 00:07:37.655 16:18:45 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:37.655 16:18:45 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:37.655 16:18:45 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:37.655 16:18:45 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:37.655 16:18:45 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:37.655 16:18:45 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:37.655 16:18:45 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:37.655 16:18:45 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:37.655 16:18:45 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:07:37.655 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:37.655 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:07:37.655 altname enp24s0f0np0 00:07:37.655 altname ens785f0np0 00:07:37.655 inet 192.168.100.8/24 scope global mlx_0_0 00:07:37.655 valid_lft forever preferred_lft forever 00:07:37.655 16:18:45 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:37.655 16:18:45 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:07:37.655 16:18:45 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:37.655 16:18:45 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:37.655 16:18:45 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:37.655 16:18:45 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:37.655 16:18:45 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:37.655 16:18:45 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:37.655 16:18:45 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:07:37.655 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:37.655 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:07:37.655 altname enp24s0f1np1 00:07:37.655 altname ens785f1np1 00:07:37.656 inet 192.168.100.9/24 scope global mlx_0_1 00:07:37.656 valid_lft forever preferred_lft forever 00:07:37.656 16:18:45 -- nvmf/common.sh@411 -- # return 0 00:07:37.656 16:18:45 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:37.656 16:18:45 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:37.656 16:18:45 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:07:37.656 16:18:45 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:07:37.656 16:18:45 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:37.656 16:18:45 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:37.656 16:18:45 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:37.656 16:18:45 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:37.656 16:18:45 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:37.656 16:18:45 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:37.656 16:18:45 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:37.656 16:18:45 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:37.656 16:18:45 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:37.656 16:18:45 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:37.656 16:18:45 -- nvmf/common.sh@105 -- # continue 2 00:07:37.656 16:18:45 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:37.656 16:18:45 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:37.656 16:18:45 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:37.656 16:18:45 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:37.656 16:18:45 -- nvmf/common.sh@103 -- # 
[[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:37.656 16:18:45 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:37.656 16:18:45 -- nvmf/common.sh@105 -- # continue 2 00:07:37.656 16:18:45 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:37.656 16:18:45 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:37.656 16:18:45 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:37.656 16:18:45 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:37.656 16:18:45 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:37.656 16:18:45 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:37.656 16:18:45 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:37.656 16:18:45 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:07:37.656 16:18:45 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:37.656 16:18:45 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:37.656 16:18:45 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:37.656 16:18:45 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:37.656 16:18:45 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:07:37.656 192.168.100.9' 00:07:37.656 16:18:45 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:07:37.656 192.168.100.9' 00:07:37.656 16:18:45 -- nvmf/common.sh@446 -- # head -n 1 00:07:37.656 16:18:45 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:37.656 16:18:45 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:07:37.656 192.168.100.9' 00:07:37.656 16:18:45 -- nvmf/common.sh@447 -- # tail -n +2 00:07:37.656 16:18:45 -- nvmf/common.sh@447 -- # head -n 1 00:07:37.656 16:18:45 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:37.656 16:18:45 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:07:37.656 16:18:45 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:37.656 16:18:45 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:07:37.656 16:18:45 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:07:37.656 16:18:45 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:07:37.656 16:18:45 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:37.656 16:18:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:37.656 16:18:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:37.656 16:18:45 -- common/autotest_common.sh@10 -- # set +x 00:07:37.656 ************************************ 00:07:37.656 START TEST nvmf_filesystem_no_in_capsule 00:07:37.656 ************************************ 00:07:37.656 16:18:45 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 0 00:07:37.656 16:18:45 -- target/filesystem.sh@47 -- # in_capsule=0 00:07:37.656 16:18:45 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:37.656 16:18:45 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:37.656 16:18:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:37.656 16:18:45 -- common/autotest_common.sh@10 -- # set +x 00:07:37.656 16:18:45 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:37.656 16:18:45 -- nvmf/common.sh@470 -- # nvmfpid=357012 00:07:37.656 16:18:45 -- nvmf/common.sh@471 -- # waitforlisten 357012 00:07:37.656 16:18:45 -- common/autotest_common.sh@817 -- # '[' -z 357012 ']' 00:07:37.656 16:18:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.656 16:18:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:37.656 16:18:45 -- common/autotest_common.sh@824 -- # 
echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.656 16:18:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:37.656 16:18:45 -- common/autotest_common.sh@10 -- # set +x 00:07:37.656 [2024-04-26 16:18:45.902576] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:07:37.656 [2024-04-26 16:18:45.902627] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:37.656 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.656 [2024-04-26 16:18:45.977409] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:37.656 [2024-04-26 16:18:46.066728] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:37.656 [2024-04-26 16:18:46.066768] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:37.656 [2024-04-26 16:18:46.066778] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:37.656 [2024-04-26 16:18:46.066787] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:37.656 [2024-04-26 16:18:46.066795] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:37.656 [2024-04-26 16:18:46.066850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.656 [2024-04-26 16:18:46.066935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:37.656 [2024-04-26 16:18:46.067010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:37.656 [2024-04-26 16:18:46.067012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.914 16:18:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:37.914 16:18:46 -- common/autotest_common.sh@850 -- # return 0 00:07:37.914 16:18:46 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:37.914 16:18:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:37.914 16:18:46 -- common/autotest_common.sh@10 -- # set +x 00:07:37.914 16:18:46 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:37.914 16:18:46 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:37.915 16:18:46 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:07:37.915 16:18:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:37.915 16:18:46 -- common/autotest_common.sh@10 -- # set +x 00:07:37.915 [2024-04-26 16:18:46.780273] rdma.c:2712:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:07:37.915 [2024-04-26 16:18:46.800731] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f3e310/0x1f42800) succeed. 00:07:37.915 [2024-04-26 16:18:46.810919] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f3f950/0x1f83e90) succeed. 
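The rpc_cmd calls that follow are the usual SPDK JSON-RPC sequence for exposing a malloc bdev over NVMe/RDMA. Run by hand against the same /var/tmp/spdk.sock with the scripts/rpc.py client bundled with SPDK, the equivalent would be roughly as below (all values copied from the log; this is a sketch of what the test wrapper does, not a verbatim reproduction of it):

  # Transport for the no-in-capsule variant: -c 0 disables in-capsule data
  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
  # 512 MiB backing bdev with 512-byte blocks (matches num_blocks=1048576 below)
  scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
  # Subsystem, namespace, and RDMA listener on the first target IP
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420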
00:07:37.915 16:18:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:37.915 16:18:46 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:37.915 16:18:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:37.915 16:18:46 -- common/autotest_common.sh@10 -- # set +x 00:07:38.173 Malloc1 00:07:38.173 16:18:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:38.173 16:18:47 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:38.173 16:18:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:38.173 16:18:47 -- common/autotest_common.sh@10 -- # set +x 00:07:38.173 16:18:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:38.173 16:18:47 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:38.173 16:18:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:38.173 16:18:47 -- common/autotest_common.sh@10 -- # set +x 00:07:38.173 16:18:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:38.173 16:18:47 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:38.173 16:18:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:38.173 16:18:47 -- common/autotest_common.sh@10 -- # set +x 00:07:38.173 [2024-04-26 16:18:47.071162] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:38.173 16:18:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:38.173 16:18:47 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:38.173 16:18:47 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:07:38.173 16:18:47 -- common/autotest_common.sh@1365 -- # local bdev_info 00:07:38.173 16:18:47 -- common/autotest_common.sh@1366 -- # local bs 00:07:38.173 16:18:47 -- common/autotest_common.sh@1367 -- # local nb 00:07:38.174 16:18:47 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:38.174 16:18:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:38.174 16:18:47 -- common/autotest_common.sh@10 -- # set +x 00:07:38.174 16:18:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:38.174 16:18:47 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:07:38.174 { 00:07:38.174 "name": "Malloc1", 00:07:38.174 "aliases": [ 00:07:38.174 "29ed37f7-36a3-4046-b48e-67327a740a0d" 00:07:38.174 ], 00:07:38.174 "product_name": "Malloc disk", 00:07:38.174 "block_size": 512, 00:07:38.174 "num_blocks": 1048576, 00:07:38.174 "uuid": "29ed37f7-36a3-4046-b48e-67327a740a0d", 00:07:38.174 "assigned_rate_limits": { 00:07:38.174 "rw_ios_per_sec": 0, 00:07:38.174 "rw_mbytes_per_sec": 0, 00:07:38.174 "r_mbytes_per_sec": 0, 00:07:38.174 "w_mbytes_per_sec": 0 00:07:38.174 }, 00:07:38.174 "claimed": true, 00:07:38.174 "claim_type": "exclusive_write", 00:07:38.174 "zoned": false, 00:07:38.174 "supported_io_types": { 00:07:38.174 "read": true, 00:07:38.174 "write": true, 00:07:38.174 "unmap": true, 00:07:38.174 "write_zeroes": true, 00:07:38.174 "flush": true, 00:07:38.174 "reset": true, 00:07:38.174 "compare": false, 00:07:38.174 "compare_and_write": false, 00:07:38.174 "abort": true, 00:07:38.174 "nvme_admin": false, 00:07:38.174 "nvme_io": false 00:07:38.174 }, 00:07:38.174 "memory_domains": [ 00:07:38.174 { 00:07:38.174 "dma_device_id": "system", 00:07:38.174 "dma_device_type": 1 00:07:38.174 }, 00:07:38.174 { 00:07:38.174 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:07:38.174 "dma_device_type": 2 00:07:38.174 } 00:07:38.174 ], 00:07:38.174 "driver_specific": {} 00:07:38.174 } 00:07:38.174 ]' 00:07:38.174 16:18:47 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:07:38.174 16:18:47 -- common/autotest_common.sh@1369 -- # bs=512 00:07:38.174 16:18:47 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:07:38.174 16:18:47 -- common/autotest_common.sh@1370 -- # nb=1048576 00:07:38.174 16:18:47 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:07:38.174 16:18:47 -- common/autotest_common.sh@1374 -- # echo 512 00:07:38.174 16:18:47 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:38.174 16:18:47 -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:07:40.077 16:18:48 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:40.077 16:18:48 -- common/autotest_common.sh@1184 -- # local i=0 00:07:40.077 16:18:48 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:40.077 16:18:48 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:40.077 16:18:48 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:41.981 16:18:50 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:41.981 16:18:50 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:41.981 16:18:50 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:41.981 16:18:50 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:41.981 16:18:50 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:41.981 16:18:50 -- common/autotest_common.sh@1194 -- # return 0 00:07:41.981 16:18:50 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:41.981 16:18:50 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:41.981 16:18:50 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:41.981 16:18:50 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:41.981 16:18:50 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:41.981 16:18:50 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:41.981 16:18:50 -- setup/common.sh@80 -- # echo 536870912 00:07:41.981 16:18:50 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:41.981 16:18:50 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:41.981 16:18:50 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:41.981 16:18:50 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:41.981 16:18:50 -- target/filesystem.sh@69 -- # partprobe 00:07:41.981 16:18:50 -- target/filesystem.sh@70 -- # sleep 1 00:07:43.359 16:18:51 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:43.359 16:18:51 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:43.359 16:18:51 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:43.359 16:18:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:43.359 16:18:51 -- common/autotest_common.sh@10 -- # set +x 00:07:43.359 ************************************ 00:07:43.359 START TEST filesystem_ext4 00:07:43.359 ************************************ 00:07:43.359 16:18:52 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:43.359 16:18:52 -- target/filesystem.sh@18 -- 
# fstype=ext4 00:07:43.359 16:18:52 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:43.359 16:18:52 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:43.359 16:18:52 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:43.359 16:18:52 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:43.359 16:18:52 -- common/autotest_common.sh@914 -- # local i=0 00:07:43.359 16:18:52 -- common/autotest_common.sh@915 -- # local force 00:07:43.359 16:18:52 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:43.359 16:18:52 -- common/autotest_common.sh@918 -- # force=-F 00:07:43.359 16:18:52 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:43.359 mke2fs 1.46.5 (30-Dec-2021) 00:07:43.359 Discarding device blocks: 0/522240 done 00:07:43.359 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:43.359 Filesystem UUID: 7beb443e-676f-4b03-af5c-fa3b12f738fb 00:07:43.359 Superblock backups stored on blocks: 00:07:43.359 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:43.359 00:07:43.359 Allocating group tables: 0/64 done 00:07:43.359 Writing inode tables: 0/64 done 00:07:43.359 Creating journal (8192 blocks): done 00:07:43.359 Writing superblocks and filesystem accounting information: 0/64 done 00:07:43.359 00:07:43.359 16:18:52 -- common/autotest_common.sh@931 -- # return 0 00:07:43.359 16:18:52 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:43.359 16:18:52 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:43.359 16:18:52 -- target/filesystem.sh@25 -- # sync 00:07:43.359 16:18:52 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:43.359 16:18:52 -- target/filesystem.sh@27 -- # sync 00:07:43.359 16:18:52 -- target/filesystem.sh@29 -- # i=0 00:07:43.359 16:18:52 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:43.359 16:18:52 -- target/filesystem.sh@37 -- # kill -0 357012 00:07:43.359 16:18:52 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:43.359 16:18:52 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:43.359 16:18:52 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:43.359 16:18:52 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:43.359 00:07:43.359 real 0m0.198s 00:07:43.359 user 0m0.033s 00:07:43.359 sys 0m0.063s 00:07:43.359 16:18:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:43.359 16:18:52 -- common/autotest_common.sh@10 -- # set +x 00:07:43.359 ************************************ 00:07:43.359 END TEST filesystem_ext4 00:07:43.359 ************************************ 00:07:43.619 16:18:52 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:43.619 16:18:52 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:43.619 16:18:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:43.619 16:18:52 -- common/autotest_common.sh@10 -- # set +x 00:07:43.619 ************************************ 00:07:43.619 START TEST filesystem_btrfs 00:07:43.619 ************************************ 00:07:43.619 16:18:52 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:43.619 16:18:52 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:43.619 16:18:52 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:43.619 16:18:52 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:43.619 16:18:52 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:07:43.619 16:18:52 -- common/autotest_common.sh@913 
-- # local dev_name=/dev/nvme0n1p1 00:07:43.619 16:18:52 -- common/autotest_common.sh@914 -- # local i=0 00:07:43.619 16:18:52 -- common/autotest_common.sh@915 -- # local force 00:07:43.619 16:18:52 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:43.619 16:18:52 -- common/autotest_common.sh@920 -- # force=-f 00:07:43.619 16:18:52 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:43.878 btrfs-progs v6.6.2 00:07:43.878 See https://btrfs.readthedocs.io for more information. 00:07:43.878 00:07:43.878 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:43.878 NOTE: several default settings have changed in version 5.15, please make sure 00:07:43.878 this does not affect your deployments: 00:07:43.878 - DUP for metadata (-m dup) 00:07:43.878 - enabled no-holes (-O no-holes) 00:07:43.878 - enabled free-space-tree (-R free-space-tree) 00:07:43.878 00:07:43.878 Label: (null) 00:07:43.878 UUID: 9c8cd6f7-2d43-4854-8335-94b3d13b3e33 00:07:43.878 Node size: 16384 00:07:43.878 Sector size: 4096 00:07:43.878 Filesystem size: 510.00MiB 00:07:43.878 Block group profiles: 00:07:43.878 Data: single 8.00MiB 00:07:43.878 Metadata: DUP 32.00MiB 00:07:43.878 System: DUP 8.00MiB 00:07:43.878 SSD detected: yes 00:07:43.878 Zoned device: no 00:07:43.878 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:43.878 Runtime features: free-space-tree 00:07:43.878 Checksum: crc32c 00:07:43.878 Number of devices: 1 00:07:43.878 Devices: 00:07:43.878 ID SIZE PATH 00:07:43.878 1 510.00MiB /dev/nvme0n1p1 00:07:43.878 00:07:43.878 16:18:52 -- common/autotest_common.sh@931 -- # return 0 00:07:43.878 16:18:52 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:43.878 16:18:52 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:43.878 16:18:52 -- target/filesystem.sh@25 -- # sync 00:07:43.878 16:18:52 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:43.878 16:18:52 -- target/filesystem.sh@27 -- # sync 00:07:43.878 16:18:52 -- target/filesystem.sh@29 -- # i=0 00:07:43.878 16:18:52 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:43.878 16:18:52 -- target/filesystem.sh@37 -- # kill -0 357012 00:07:43.878 16:18:52 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:43.878 16:18:52 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:43.878 16:18:52 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:43.878 16:18:52 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:43.878 00:07:43.878 real 0m0.303s 00:07:43.878 user 0m0.023s 00:07:43.878 sys 0m0.186s 00:07:43.878 16:18:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:43.878 16:18:52 -- common/autotest_common.sh@10 -- # set +x 00:07:43.878 ************************************ 00:07:43.878 END TEST filesystem_btrfs 00:07:43.878 ************************************ 00:07:44.138 16:18:52 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:44.138 16:18:52 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:44.138 16:18:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:44.138 16:18:52 -- common/autotest_common.sh@10 -- # set +x 00:07:44.138 ************************************ 00:07:44.138 START TEST filesystem_xfs 00:07:44.138 ************************************ 00:07:44.138 16:18:53 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:07:44.138 16:18:53 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:44.138 16:18:53 -- target/filesystem.sh@19 -- # 
nvme_name=nvme0n1 00:07:44.138 16:18:53 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:44.138 16:18:53 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:07:44.138 16:18:53 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:44.138 16:18:53 -- common/autotest_common.sh@914 -- # local i=0 00:07:44.138 16:18:53 -- common/autotest_common.sh@915 -- # local force 00:07:44.138 16:18:53 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:44.138 16:18:53 -- common/autotest_common.sh@920 -- # force=-f 00:07:44.138 16:18:53 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:44.398 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:44.398 = sectsz=512 attr=2, projid32bit=1 00:07:44.398 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:44.398 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:44.398 data = bsize=4096 blocks=130560, imaxpct=25 00:07:44.398 = sunit=0 swidth=0 blks 00:07:44.398 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:44.398 log =internal log bsize=4096 blocks=16384, version=2 00:07:44.398 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:44.398 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:44.398 Discarding blocks...Done. 00:07:44.398 16:18:53 -- common/autotest_common.sh@931 -- # return 0 00:07:44.398 16:18:53 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:44.965 16:18:53 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:44.965 16:18:53 -- target/filesystem.sh@25 -- # sync 00:07:44.965 16:18:53 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:44.965 16:18:53 -- target/filesystem.sh@27 -- # sync 00:07:44.965 16:18:53 -- target/filesystem.sh@29 -- # i=0 00:07:44.966 16:18:53 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:44.966 16:18:53 -- target/filesystem.sh@37 -- # kill -0 357012 00:07:44.966 16:18:53 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:44.966 16:18:53 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:44.966 16:18:53 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:44.966 16:18:53 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:44.966 00:07:44.966 real 0m0.675s 00:07:44.966 user 0m0.024s 00:07:44.966 sys 0m0.110s 00:07:44.966 16:18:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:44.966 16:18:53 -- common/autotest_common.sh@10 -- # set +x 00:07:44.966 ************************************ 00:07:44.966 END TEST filesystem_xfs 00:07:44.966 ************************************ 00:07:44.966 16:18:53 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:44.966 16:18:53 -- target/filesystem.sh@93 -- # sync 00:07:44.966 16:18:53 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:48.249 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:48.249 16:18:57 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:48.249 16:18:57 -- common/autotest_common.sh@1205 -- # local i=0 00:07:48.249 16:18:57 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:48.249 16:18:57 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:48.249 16:18:57 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:48.249 16:18:57 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:48.249 16:18:57 -- common/autotest_common.sh@1217 -- # return 0 00:07:48.249 16:18:57 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:07:48.249 16:18:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:48.249 16:18:57 -- common/autotest_common.sh@10 -- # set +x 00:07:48.249 16:18:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:48.249 16:18:57 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:48.249 16:18:57 -- target/filesystem.sh@101 -- # killprocess 357012 00:07:48.249 16:18:57 -- common/autotest_common.sh@936 -- # '[' -z 357012 ']' 00:07:48.249 16:18:57 -- common/autotest_common.sh@940 -- # kill -0 357012 00:07:48.249 16:18:57 -- common/autotest_common.sh@941 -- # uname 00:07:48.249 16:18:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:48.249 16:18:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 357012 00:07:48.249 16:18:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:48.249 16:18:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:48.249 16:18:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 357012' 00:07:48.249 killing process with pid 357012 00:07:48.249 16:18:57 -- common/autotest_common.sh@955 -- # kill 357012 00:07:48.249 16:18:57 -- common/autotest_common.sh@960 -- # wait 357012 00:07:48.249 [2024-04-26 16:18:57.232070] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:07:48.817 16:18:57 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:48.817 00:07:48.817 real 0m11.772s 00:07:48.817 user 0m46.241s 00:07:48.817 sys 0m1.643s 00:07:48.817 16:18:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:48.817 16:18:57 -- common/autotest_common.sh@10 -- # set +x 00:07:48.817 ************************************ 00:07:48.817 END TEST nvmf_filesystem_no_in_capsule 00:07:48.817 ************************************ 00:07:48.817 16:18:57 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:48.817 16:18:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:48.817 16:18:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:48.817 16:18:57 -- common/autotest_common.sh@10 -- # set +x 00:07:48.817 ************************************ 00:07:48.817 START TEST nvmf_filesystem_in_capsule 00:07:48.817 ************************************ 00:07:48.817 16:18:57 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 4096 00:07:48.817 16:18:57 -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:48.817 16:18:57 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:48.817 16:18:57 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:48.817 16:18:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:48.817 16:18:57 -- common/autotest_common.sh@10 -- # set +x 00:07:48.817 16:18:57 -- nvmf/common.sh@470 -- # nvmfpid=358730 00:07:48.817 16:18:57 -- nvmf/common.sh@471 -- # waitforlisten 358730 00:07:48.817 16:18:57 -- common/autotest_common.sh@817 -- # '[' -z 358730 ']' 00:07:48.817 16:18:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.817 16:18:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:48.817 16:18:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
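Each of the ext4, btrfs, and xfs passes above runs the same create/mount/write/teardown cycle against the first partition of the exported namespace; stripped of the test harness it amounts to the loop below (the loop itself is ours, the individual commands and paths are exactly the ones visible in the log):

  for fstype in ext4 btrfs xfs; do
    case $fstype in ext4) force=-F ;; *) force=-f ;; esac
    mkfs.$fstype $force /dev/nvme0n1p1
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync      # the file must be persistable...
    rm /mnt/device/aaa && sync         # ...and removable before the unmount check
    umount /mnt/device
  done

After each unmount the script also double-checks, as seen in the grep -q -w lines above, that both nvme0n1 and nvme0n1p1 are still listed by lsblk and that the target process (pid 357012 here) is still alive via kill -0.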
00:07:48.817 16:18:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:48.817 16:18:57 -- common/autotest_common.sh@10 -- # set +x 00:07:48.817 16:18:57 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:49.076 [2024-04-26 16:18:57.868618] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:07:49.077 [2024-04-26 16:18:57.868675] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:49.077 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.077 [2024-04-26 16:18:57.937813] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:49.077 [2024-04-26 16:18:58.019784] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:49.077 [2024-04-26 16:18:58.019826] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:49.077 [2024-04-26 16:18:58.019835] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:49.077 [2024-04-26 16:18:58.019859] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:49.077 [2024-04-26 16:18:58.019866] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:49.077 [2024-04-26 16:18:58.019971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:49.077 [2024-04-26 16:18:58.019995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:49.077 [2024-04-26 16:18:58.020088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:49.077 [2024-04-26 16:18:58.020089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.663 16:18:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:49.663 16:18:58 -- common/autotest_common.sh@850 -- # return 0 00:07:49.663 16:18:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:49.663 16:18:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:49.663 16:18:58 -- common/autotest_common.sh@10 -- # set +x 00:07:49.934 16:18:58 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:49.934 16:18:58 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:49.934 16:18:58 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:07:49.934 16:18:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:49.934 16:18:58 -- common/autotest_common.sh@10 -- # set +x 00:07:49.934 [2024-04-26 16:18:58.751332] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x24f2310/0x24f6800) succeed. 00:07:49.934 [2024-04-26 16:18:58.761544] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x24f3950/0x2537e90) succeed. 
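Functionally, this second pass differs from the first only in the transport configuration: nvmf_create_transport is invoked with -c 4096 instead of -c 0, so the target accepts up to 4 KiB of data inside the command capsule itself rather than always fetching write data with a separate RDMA READ. A hand-run equivalent (again a sketch, values taken from the log) would be:

  # In-capsule variant: allow 4 KiB of data to ride along with the command capsule
  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096
  # Sanity check: the reported in_capsule_data_size should now be 4096
  scripts/rpc.py nvmf_get_transports

The rest of the subsystem setup, the nvme connect from the initiator, and the per-filesystem checks then proceed exactly as in the no-in-capsule run.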
00:07:49.934 16:18:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:49.934 16:18:58 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:49.934 16:18:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:49.934 16:18:58 -- common/autotest_common.sh@10 -- # set +x 00:07:50.194 Malloc1 00:07:50.194 16:18:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:50.194 16:18:59 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:50.194 16:18:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:50.194 16:18:59 -- common/autotest_common.sh@10 -- # set +x 00:07:50.194 16:18:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:50.194 16:18:59 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:50.194 16:18:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:50.194 16:18:59 -- common/autotest_common.sh@10 -- # set +x 00:07:50.194 16:18:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:50.194 16:18:59 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:50.194 16:18:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:50.194 16:18:59 -- common/autotest_common.sh@10 -- # set +x 00:07:50.194 [2024-04-26 16:18:59.050750] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:50.194 16:18:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:50.194 16:18:59 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:50.194 16:18:59 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:07:50.194 16:18:59 -- common/autotest_common.sh@1365 -- # local bdev_info 00:07:50.194 16:18:59 -- common/autotest_common.sh@1366 -- # local bs 00:07:50.194 16:18:59 -- common/autotest_common.sh@1367 -- # local nb 00:07:50.194 16:18:59 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:50.194 16:18:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:50.194 16:18:59 -- common/autotest_common.sh@10 -- # set +x 00:07:50.194 16:18:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:50.194 16:18:59 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:07:50.194 { 00:07:50.194 "name": "Malloc1", 00:07:50.194 "aliases": [ 00:07:50.194 "0f4952bd-e823-4170-9a6a-ad140fb85711" 00:07:50.194 ], 00:07:50.194 "product_name": "Malloc disk", 00:07:50.194 "block_size": 512, 00:07:50.194 "num_blocks": 1048576, 00:07:50.194 "uuid": "0f4952bd-e823-4170-9a6a-ad140fb85711", 00:07:50.194 "assigned_rate_limits": { 00:07:50.194 "rw_ios_per_sec": 0, 00:07:50.194 "rw_mbytes_per_sec": 0, 00:07:50.194 "r_mbytes_per_sec": 0, 00:07:50.194 "w_mbytes_per_sec": 0 00:07:50.194 }, 00:07:50.194 "claimed": true, 00:07:50.194 "claim_type": "exclusive_write", 00:07:50.194 "zoned": false, 00:07:50.194 "supported_io_types": { 00:07:50.194 "read": true, 00:07:50.194 "write": true, 00:07:50.194 "unmap": true, 00:07:50.194 "write_zeroes": true, 00:07:50.194 "flush": true, 00:07:50.194 "reset": true, 00:07:50.194 "compare": false, 00:07:50.194 "compare_and_write": false, 00:07:50.194 "abort": true, 00:07:50.194 "nvme_admin": false, 00:07:50.194 "nvme_io": false 00:07:50.194 }, 00:07:50.194 "memory_domains": [ 00:07:50.194 { 00:07:50.194 "dma_device_id": "system", 00:07:50.194 "dma_device_type": 1 00:07:50.194 }, 00:07:50.194 { 00:07:50.194 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:07:50.194 "dma_device_type": 2 00:07:50.194 } 00:07:50.194 ], 00:07:50.194 "driver_specific": {} 00:07:50.194 } 00:07:50.194 ]' 00:07:50.194 16:18:59 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:07:50.194 16:18:59 -- common/autotest_common.sh@1369 -- # bs=512 00:07:50.194 16:18:59 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:07:50.194 16:18:59 -- common/autotest_common.sh@1370 -- # nb=1048576 00:07:50.194 16:18:59 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:07:50.194 16:18:59 -- common/autotest_common.sh@1374 -- # echo 512 00:07:50.194 16:18:59 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:50.194 16:18:59 -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:07:52.093 16:19:00 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:52.093 16:19:00 -- common/autotest_common.sh@1184 -- # local i=0 00:07:52.093 16:19:00 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:52.093 16:19:00 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:52.093 16:19:00 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:53.993 16:19:02 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:53.993 16:19:02 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:53.993 16:19:02 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:53.993 16:19:02 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:53.993 16:19:02 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:53.993 16:19:02 -- common/autotest_common.sh@1194 -- # return 0 00:07:53.993 16:19:02 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:53.993 16:19:02 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:53.993 16:19:02 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:53.993 16:19:02 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:53.993 16:19:02 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:53.993 16:19:02 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:53.993 16:19:02 -- setup/common.sh@80 -- # echo 536870912 00:07:53.993 16:19:02 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:53.993 16:19:02 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:53.993 16:19:02 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:53.993 16:19:02 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:53.993 16:19:02 -- target/filesystem.sh@69 -- # partprobe 00:07:53.993 16:19:02 -- target/filesystem.sh@70 -- # sleep 1 00:07:54.926 16:19:03 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:54.926 16:19:03 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:54.926 16:19:03 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:54.926 16:19:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:54.926 16:19:03 -- common/autotest_common.sh@10 -- # set +x 00:07:55.184 ************************************ 00:07:55.184 START TEST filesystem_in_capsule_ext4 00:07:55.184 ************************************ 00:07:55.184 16:19:04 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:55.184 16:19:04 -- 
target/filesystem.sh@18 -- # fstype=ext4 00:07:55.184 16:19:04 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:55.184 16:19:04 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:55.184 16:19:04 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:55.184 16:19:04 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:55.184 16:19:04 -- common/autotest_common.sh@914 -- # local i=0 00:07:55.184 16:19:04 -- common/autotest_common.sh@915 -- # local force 00:07:55.184 16:19:04 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:55.184 16:19:04 -- common/autotest_common.sh@918 -- # force=-F 00:07:55.184 16:19:04 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:55.184 mke2fs 1.46.5 (30-Dec-2021) 00:07:55.184 Discarding device blocks: 0/522240 done 00:07:55.184 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:55.184 Filesystem UUID: 757add6d-581b-4e6d-9373-d612ed0c1156 00:07:55.184 Superblock backups stored on blocks: 00:07:55.184 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:55.184 00:07:55.184 Allocating group tables: 0/64 done 00:07:55.184 Writing inode tables: 0/64 done 00:07:55.184 Creating journal (8192 blocks): done 00:07:55.184 Writing superblocks and filesystem accounting information: 0/64 done 00:07:55.184 00:07:55.184 16:19:04 -- common/autotest_common.sh@931 -- # return 0 00:07:55.184 16:19:04 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:55.441 16:19:04 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:55.441 16:19:04 -- target/filesystem.sh@25 -- # sync 00:07:55.441 16:19:04 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:55.441 16:19:04 -- target/filesystem.sh@27 -- # sync 00:07:55.441 16:19:04 -- target/filesystem.sh@29 -- # i=0 00:07:55.441 16:19:04 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:55.441 16:19:04 -- target/filesystem.sh@37 -- # kill -0 358730 00:07:55.441 16:19:04 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:55.441 16:19:04 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:55.441 16:19:04 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:55.441 16:19:04 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:55.441 00:07:55.441 real 0m0.184s 00:07:55.441 user 0m0.029s 00:07:55.441 sys 0m0.069s 00:07:55.441 16:19:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:55.441 16:19:04 -- common/autotest_common.sh@10 -- # set +x 00:07:55.441 ************************************ 00:07:55.441 END TEST filesystem_in_capsule_ext4 00:07:55.441 ************************************ 00:07:55.441 16:19:04 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:55.441 16:19:04 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:55.441 16:19:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:55.441 16:19:04 -- common/autotest_common.sh@10 -- # set +x 00:07:55.699 ************************************ 00:07:55.699 START TEST filesystem_in_capsule_btrfs 00:07:55.699 ************************************ 00:07:55.699 16:19:04 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:55.699 16:19:04 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:55.699 16:19:04 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:55.699 16:19:04 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:55.699 16:19:04 -- common/autotest_common.sh@912 -- # local 
fstype=btrfs 00:07:55.699 16:19:04 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:55.699 16:19:04 -- common/autotest_common.sh@914 -- # local i=0 00:07:55.699 16:19:04 -- common/autotest_common.sh@915 -- # local force 00:07:55.699 16:19:04 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:55.699 16:19:04 -- common/autotest_common.sh@920 -- # force=-f 00:07:55.699 16:19:04 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:55.699 btrfs-progs v6.6.2 00:07:55.699 See https://btrfs.readthedocs.io for more information. 00:07:55.699 00:07:55.699 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:55.699 NOTE: several default settings have changed in version 5.15, please make sure 00:07:55.699 this does not affect your deployments: 00:07:55.699 - DUP for metadata (-m dup) 00:07:55.699 - enabled no-holes (-O no-holes) 00:07:55.699 - enabled free-space-tree (-R free-space-tree) 00:07:55.699 00:07:55.699 Label: (null) 00:07:55.699 UUID: 1ef89aa3-1380-4d03-8107-bb0cd7c7b168 00:07:55.699 Node size: 16384 00:07:55.699 Sector size: 4096 00:07:55.699 Filesystem size: 510.00MiB 00:07:55.699 Block group profiles: 00:07:55.699 Data: single 8.00MiB 00:07:55.699 Metadata: DUP 32.00MiB 00:07:55.699 System: DUP 8.00MiB 00:07:55.699 SSD detected: yes 00:07:55.699 Zoned device: no 00:07:55.699 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:55.699 Runtime features: free-space-tree 00:07:55.699 Checksum: crc32c 00:07:55.699 Number of devices: 1 00:07:55.699 Devices: 00:07:55.699 ID SIZE PATH 00:07:55.699 1 510.00MiB /dev/nvme0n1p1 00:07:55.699 00:07:55.699 16:19:04 -- common/autotest_common.sh@931 -- # return 0 00:07:55.699 16:19:04 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:55.699 16:19:04 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:55.699 16:19:04 -- target/filesystem.sh@25 -- # sync 00:07:55.699 16:19:04 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:55.957 16:19:04 -- target/filesystem.sh@27 -- # sync 00:07:55.957 16:19:04 -- target/filesystem.sh@29 -- # i=0 00:07:55.957 16:19:04 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:55.957 16:19:04 -- target/filesystem.sh@37 -- # kill -0 358730 00:07:55.957 16:19:04 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:55.957 16:19:04 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:55.957 16:19:04 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:55.957 16:19:04 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:55.957 00:07:55.957 real 0m0.279s 00:07:55.957 user 0m0.025s 00:07:55.957 sys 0m0.138s 00:07:55.957 16:19:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:55.957 16:19:04 -- common/autotest_common.sh@10 -- # set +x 00:07:55.957 ************************************ 00:07:55.957 END TEST filesystem_in_capsule_btrfs 00:07:55.957 ************************************ 00:07:55.957 16:19:04 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:55.957 16:19:04 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:55.957 16:19:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:55.957 16:19:04 -- common/autotest_common.sh@10 -- # set +x 00:07:56.215 ************************************ 00:07:56.215 START TEST filesystem_in_capsule_xfs 00:07:56.215 ************************************ 00:07:56.215 16:19:04 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:07:56.215 
16:19:04 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:56.215 16:19:04 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:56.215 16:19:04 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:56.215 16:19:04 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:07:56.215 16:19:04 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:56.215 16:19:04 -- common/autotest_common.sh@914 -- # local i=0 00:07:56.215 16:19:04 -- common/autotest_common.sh@915 -- # local force 00:07:56.215 16:19:04 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:56.215 16:19:04 -- common/autotest_common.sh@920 -- # force=-f 00:07:56.215 16:19:04 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:56.215 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:56.215 = sectsz=512 attr=2, projid32bit=1 00:07:56.215 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:56.215 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:56.215 data = bsize=4096 blocks=130560, imaxpct=25 00:07:56.215 = sunit=0 swidth=0 blks 00:07:56.215 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:56.215 log =internal log bsize=4096 blocks=16384, version=2 00:07:56.215 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:56.215 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:56.215 Discarding blocks...Done. 00:07:56.215 16:19:05 -- common/autotest_common.sh@931 -- # return 0 00:07:56.215 16:19:05 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:56.215 16:19:05 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:56.215 16:19:05 -- target/filesystem.sh@25 -- # sync 00:07:56.215 16:19:05 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:56.215 16:19:05 -- target/filesystem.sh@27 -- # sync 00:07:56.215 16:19:05 -- target/filesystem.sh@29 -- # i=0 00:07:56.215 16:19:05 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:56.215 16:19:05 -- target/filesystem.sh@37 -- # kill -0 358730 00:07:56.215 16:19:05 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:56.215 16:19:05 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:56.215 16:19:05 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:56.215 16:19:05 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:56.215 00:07:56.215 real 0m0.211s 00:07:56.215 user 0m0.029s 00:07:56.215 sys 0m0.077s 00:07:56.215 16:19:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:56.215 16:19:05 -- common/autotest_common.sh@10 -- # set +x 00:07:56.215 ************************************ 00:07:56.215 END TEST filesystem_in_capsule_xfs 00:07:56.215 ************************************ 00:07:56.473 16:19:05 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:56.473 16:19:05 -- target/filesystem.sh@93 -- # sync 00:07:56.473 16:19:05 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:59.757 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:59.757 16:19:08 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:59.757 16:19:08 -- common/autotest_common.sh@1205 -- # local i=0 00:07:59.757 16:19:08 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:59.757 16:19:08 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:59.757 16:19:08 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:59.757 16:19:08 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:59.757 16:19:08 -- 
common/autotest_common.sh@1217 -- # return 0 00:07:59.757 16:19:08 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:59.757 16:19:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.757 16:19:08 -- common/autotest_common.sh@10 -- # set +x 00:07:59.757 16:19:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.757 16:19:08 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:59.757 16:19:08 -- target/filesystem.sh@101 -- # killprocess 358730 00:07:59.757 16:19:08 -- common/autotest_common.sh@936 -- # '[' -z 358730 ']' 00:07:59.757 16:19:08 -- common/autotest_common.sh@940 -- # kill -0 358730 00:07:59.757 16:19:08 -- common/autotest_common.sh@941 -- # uname 00:07:59.757 16:19:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:59.757 16:19:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 358730 00:07:59.757 16:19:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:59.757 16:19:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:59.757 16:19:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 358730' 00:07:59.757 killing process with pid 358730 00:07:59.757 16:19:08 -- common/autotest_common.sh@955 -- # kill 358730 00:07:59.757 16:19:08 -- common/autotest_common.sh@960 -- # wait 358730 00:07:59.757 [2024-04-26 16:19:08.736858] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:08:00.326 16:19:09 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:00.326 00:08:00.326 real 0m11.322s 00:08:00.326 user 0m44.382s 00:08:00.326 sys 0m1.600s 00:08:00.326 16:19:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:00.326 16:19:09 -- common/autotest_common.sh@10 -- # set +x 00:08:00.326 ************************************ 00:08:00.326 END TEST nvmf_filesystem_in_capsule 00:08:00.326 ************************************ 00:08:00.326 16:19:09 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:00.326 16:19:09 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:00.326 16:19:09 -- nvmf/common.sh@117 -- # sync 00:08:00.326 16:19:09 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:00.326 16:19:09 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:00.326 16:19:09 -- nvmf/common.sh@120 -- # set +e 00:08:00.326 16:19:09 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:00.326 16:19:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:00.326 rmmod nvme_rdma 00:08:00.326 rmmod nvme_fabrics 00:08:00.326 16:19:09 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:00.327 16:19:09 -- nvmf/common.sh@124 -- # set -e 00:08:00.327 16:19:09 -- nvmf/common.sh@125 -- # return 0 00:08:00.327 16:19:09 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:08:00.327 16:19:09 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:00.327 16:19:09 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:08:00.327 00:08:00.327 real 0m29.709s 00:08:00.327 user 1m32.642s 00:08:00.327 sys 0m8.027s 00:08:00.327 16:19:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:00.327 16:19:09 -- common/autotest_common.sh@10 -- # set +x 00:08:00.327 ************************************ 00:08:00.327 END TEST nvmf_filesystem 00:08:00.327 ************************************ 00:08:00.327 16:19:09 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:08:00.327 16:19:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:00.327 
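[editor's note] For reference, the in-capsule filesystem flow traced above reduces to a short sequence of target-side RPCs and host-side commands. This is a minimal sketch reconstructed from the traced calls: rpc_cmd in the trace wraps SPDK's scripts/rpc.py, and the relative paths below assume the commands are run from an SPDK checkout; the hostnqn/hostid values are the ones used in the trace.
# target side: back the subsystem with a 512 MB malloc bdev and listen on RDMA
./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
# host side: connect, partition, then mkfs/mount each filesystem under test (ext4 shown)
nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c \
    --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
mkfs.ext4 -F /dev/nvme0n1p1 && mount /dev/nvme0n1p1 /mnt/device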
16:19:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:00.327 16:19:09 -- common/autotest_common.sh@10 -- # set +x 00:08:00.587 ************************************ 00:08:00.587 START TEST nvmf_discovery 00:08:00.587 ************************************ 00:08:00.587 16:19:09 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:08:00.587 * Looking for test storage... 00:08:00.587 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:00.587 16:19:09 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:00.587 16:19:09 -- nvmf/common.sh@7 -- # uname -s 00:08:00.587 16:19:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:00.587 16:19:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:00.587 16:19:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:00.587 16:19:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:00.587 16:19:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:00.587 16:19:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:00.587 16:19:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:00.587 16:19:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:00.587 16:19:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:00.587 16:19:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:00.587 16:19:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:08:00.587 16:19:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:08:00.587 16:19:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:00.587 16:19:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:00.587 16:19:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:00.587 16:19:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:00.587 16:19:09 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:00.587 16:19:09 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.587 16:19:09 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.587 16:19:09 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.587 16:19:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.587 16:19:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.587 16:19:09 -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.587 16:19:09 -- paths/export.sh@5 -- # export PATH 00:08:00.588 16:19:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.588 16:19:09 -- nvmf/common.sh@47 -- # : 0 00:08:00.588 16:19:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:00.588 16:19:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:00.588 16:19:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:00.588 16:19:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:00.588 16:19:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:00.588 16:19:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:00.588 16:19:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:00.588 16:19:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:00.588 16:19:09 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:00.588 16:19:09 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:00.588 16:19:09 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:00.588 16:19:09 -- target/discovery.sh@15 -- # hash nvme 00:08:00.588 16:19:09 -- target/discovery.sh@20 -- # nvmftestinit 00:08:00.588 16:19:09 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:08:00.588 16:19:09 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:00.588 16:19:09 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:00.588 16:19:09 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:00.588 16:19:09 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:00.588 16:19:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.588 16:19:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:00.588 16:19:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.588 16:19:09 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:00.588 16:19:09 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:00.588 16:19:09 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:00.588 16:19:09 -- common/autotest_common.sh@10 -- # set +x 00:08:07.153 16:19:15 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:07.153 16:19:15 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:07.153 16:19:15 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:07.153 16:19:15 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:07.153 16:19:15 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:07.153 16:19:15 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:07.153 16:19:15 -- nvmf/common.sh@293 -- # local -A pci_drivers 
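[editor's note] One detail from the common.sh setup traced above: the hostnqn/hostid pair passed to every nvme connect and nvme discover in these tests is generated once per run. A rough standalone equivalent, assuming nvme-cli is installed (how gen-hostnqn derives the value internally is not shown in the trace):
# generate the host NQN the same way common.sh does
nvme gen-hostnqn    # on this rig yields nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
# the --hostid used alongside it in the trace is the bare UUID portion of that NQN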
00:08:07.153 16:19:15 -- nvmf/common.sh@295 -- # net_devs=() 00:08:07.153 16:19:15 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:07.153 16:19:15 -- nvmf/common.sh@296 -- # e810=() 00:08:07.153 16:19:15 -- nvmf/common.sh@296 -- # local -ga e810 00:08:07.153 16:19:15 -- nvmf/common.sh@297 -- # x722=() 00:08:07.153 16:19:15 -- nvmf/common.sh@297 -- # local -ga x722 00:08:07.153 16:19:15 -- nvmf/common.sh@298 -- # mlx=() 00:08:07.153 16:19:15 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:07.153 16:19:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:07.153 16:19:15 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:07.153 16:19:15 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:07.153 16:19:15 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:07.153 16:19:15 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:07.153 16:19:15 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:07.153 16:19:15 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:07.153 16:19:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:07.153 16:19:15 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:07.153 16:19:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:07.153 16:19:15 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:07.153 16:19:15 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:07.153 16:19:15 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:07.153 16:19:15 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:07.153 16:19:15 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:07.153 16:19:15 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:07.153 16:19:15 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:07.153 16:19:15 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:07.153 16:19:15 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:07.153 16:19:15 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:08:07.153 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:08:07.153 16:19:15 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:07.153 16:19:15 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:07.153 16:19:15 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:08:07.153 16:19:15 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:08:07.153 16:19:15 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:07.153 16:19:15 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:07.153 16:19:15 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:07.153 16:19:15 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:08:07.153 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:08:07.153 16:19:15 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:07.153 16:19:15 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:07.153 16:19:15 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:08:07.153 16:19:15 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:08:07.153 16:19:15 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:07.153 16:19:15 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:07.153 16:19:15 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:07.153 16:19:15 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:07.153 16:19:15 -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:08:07.153 16:19:15 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.153 16:19:15 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:07.153 16:19:15 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.153 16:19:15 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:08:07.154 Found net devices under 0000:18:00.0: mlx_0_0 00:08:07.154 16:19:15 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:07.154 16:19:15 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:07.154 16:19:15 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.154 16:19:15 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:07.154 16:19:15 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.154 16:19:15 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:08:07.154 Found net devices under 0000:18:00.1: mlx_0_1 00:08:07.154 16:19:15 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:07.154 16:19:15 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:07.154 16:19:15 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:07.154 16:19:15 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:07.154 16:19:15 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:08:07.154 16:19:15 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:08:07.154 16:19:15 -- nvmf/common.sh@409 -- # rdma_device_init 00:08:07.154 16:19:15 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:08:07.154 16:19:15 -- nvmf/common.sh@58 -- # uname 00:08:07.154 16:19:15 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:07.154 16:19:15 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:07.154 16:19:15 -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:07.154 16:19:15 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:07.154 16:19:15 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:07.154 16:19:15 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:07.154 16:19:15 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:07.154 16:19:15 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:07.154 16:19:15 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:08:07.154 16:19:15 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:07.154 16:19:15 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:07.154 16:19:15 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:07.154 16:19:15 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:07.154 16:19:15 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:07.154 16:19:15 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:07.154 16:19:15 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:07.154 16:19:15 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:07.154 16:19:15 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:07.154 16:19:15 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:07.154 16:19:15 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:07.154 16:19:15 -- nvmf/common.sh@105 -- # continue 2 00:08:07.154 16:19:15 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:07.154 16:19:15 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:07.154 16:19:15 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:07.154 16:19:15 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:07.154 16:19:15 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:07.154 
16:19:15 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:07.154 16:19:15 -- nvmf/common.sh@105 -- # continue 2 00:08:07.154 16:19:15 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:07.154 16:19:15 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:07.154 16:19:15 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:07.154 16:19:15 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:07.154 16:19:15 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:07.154 16:19:15 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:07.154 16:19:15 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:07.154 16:19:15 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:07.154 16:19:15 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:07.154 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:07.154 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:08:07.154 altname enp24s0f0np0 00:08:07.154 altname ens785f0np0 00:08:07.154 inet 192.168.100.8/24 scope global mlx_0_0 00:08:07.154 valid_lft forever preferred_lft forever 00:08:07.154 16:19:15 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:07.154 16:19:15 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:07.154 16:19:15 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:07.154 16:19:15 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:07.154 16:19:15 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:07.154 16:19:15 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:07.154 16:19:15 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:07.154 16:19:15 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:07.154 16:19:15 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:07.154 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:07.154 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:08:07.154 altname enp24s0f1np1 00:08:07.154 altname ens785f1np1 00:08:07.154 inet 192.168.100.9/24 scope global mlx_0_1 00:08:07.154 valid_lft forever preferred_lft forever 00:08:07.154 16:19:15 -- nvmf/common.sh@411 -- # return 0 00:08:07.154 16:19:15 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:07.154 16:19:15 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:07.154 16:19:15 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:08:07.154 16:19:15 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:08:07.154 16:19:15 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:07.154 16:19:15 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:07.154 16:19:15 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:07.154 16:19:15 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:07.154 16:19:15 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:07.154 16:19:15 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:07.154 16:19:15 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:07.154 16:19:15 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:07.154 16:19:15 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:07.154 16:19:15 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:07.154 16:19:15 -- nvmf/common.sh@105 -- # continue 2 00:08:07.154 16:19:15 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:07.154 16:19:15 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:07.154 16:19:15 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:07.154 16:19:15 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:07.154 
16:19:15 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:07.154 16:19:15 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:07.154 16:19:15 -- nvmf/common.sh@105 -- # continue 2 00:08:07.154 16:19:15 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:07.154 16:19:15 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:07.154 16:19:15 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:07.154 16:19:15 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:07.154 16:19:15 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:07.154 16:19:15 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:07.154 16:19:15 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:07.154 16:19:15 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:07.154 16:19:15 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:07.154 16:19:15 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:07.154 16:19:15 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:07.154 16:19:15 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:07.154 16:19:15 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:08:07.154 192.168.100.9' 00:08:07.154 16:19:15 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:07.154 192.168.100.9' 00:08:07.154 16:19:15 -- nvmf/common.sh@446 -- # head -n 1 00:08:07.154 16:19:15 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:07.154 16:19:15 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:08:07.154 192.168.100.9' 00:08:07.154 16:19:15 -- nvmf/common.sh@447 -- # tail -n +2 00:08:07.154 16:19:15 -- nvmf/common.sh@447 -- # head -n 1 00:08:07.154 16:19:15 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:07.154 16:19:15 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:08:07.154 16:19:15 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:07.154 16:19:15 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:08:07.154 16:19:15 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:08:07.154 16:19:15 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:08:07.154 16:19:15 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:07.154 16:19:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:07.154 16:19:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:07.154 16:19:15 -- common/autotest_common.sh@10 -- # set +x 00:08:07.154 16:19:15 -- nvmf/common.sh@470 -- # nvmfpid=363349 00:08:07.154 16:19:15 -- nvmf/common.sh@471 -- # waitforlisten 363349 00:08:07.154 16:19:15 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:07.154 16:19:15 -- common/autotest_common.sh@817 -- # '[' -z 363349 ']' 00:08:07.154 16:19:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.154 16:19:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:07.154 16:19:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.154 16:19:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:07.154 16:19:15 -- common/autotest_common.sh@10 -- # set +x 00:08:07.154 [2024-04-26 16:19:15.355559] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
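[editor's note] The nvmfappstart step above launches the target on 4 cores (-m 0xF) with all tracepoint groups enabled (-e 0xFFFF), then waits for the RPC socket before issuing any configuration RPCs. A rough standalone equivalent of that sequence (the polling loop approximates the waitforlisten helper rather than reproducing it; rpc_get_methods is only used here as a cheap liveness probe):
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# poll the default RPC socket until the app answers (approximation of waitforlisten)
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do
    sleep 0.5
done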
00:08:07.154 [2024-04-26 16:19:15.355615] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:07.154 EAL: No free 2048 kB hugepages reported on node 1 00:08:07.154 [2024-04-26 16:19:15.428659] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:07.154 [2024-04-26 16:19:15.516693] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:07.154 [2024-04-26 16:19:15.516758] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:07.154 [2024-04-26 16:19:15.516772] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:07.154 [2024-04-26 16:19:15.516781] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:07.154 [2024-04-26 16:19:15.516788] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:07.154 [2024-04-26 16:19:15.516848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.154 [2024-04-26 16:19:15.517140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:07.155 [2024-04-26 16:19:15.517217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:07.155 [2024-04-26 16:19:15.517219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.155 16:19:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:07.155 16:19:16 -- common/autotest_common.sh@850 -- # return 0 00:08:07.155 16:19:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:07.155 16:19:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:07.155 16:19:16 -- common/autotest_common.sh@10 -- # set +x 00:08:07.414 16:19:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:07.414 16:19:16 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:07.414 16:19:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:07.414 16:19:16 -- common/autotest_common.sh@10 -- # set +x 00:08:07.414 [2024-04-26 16:19:16.245540] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x11b7310/0x11bb800) succeed. 00:08:07.414 [2024-04-26 16:19:16.255785] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x11b8950/0x11fce90) succeed. 
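[editor's note] The discovery-target configuration built in the trace that follows is compact enough to restate in full. A sketch of the equivalent direct RPC calls, with transport options, NQNs, serial numbers, bdev parameters, and the referral port taken from the traced commands (the loop mirrors the seq 1 4 loop in discovery.sh):
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
for i in 1 2 3 4; do
    # one null bdev per subsystem (NULL_BDEV_SIZE=102400, NULL_BLOCK_SIZE=512 as set in discovery.sh)
    ./scripts/rpc.py bdev_null_create Null$i 102400 512
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
done
# expose the discovery service itself and add a referral on port 4430
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
./scripts/rpc.py nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430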
00:08:07.414 16:19:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:07.414 16:19:16 -- target/discovery.sh@26 -- # seq 1 4 00:08:07.414 16:19:16 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:07.414 16:19:16 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:07.414 16:19:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:07.414 16:19:16 -- common/autotest_common.sh@10 -- # set +x 00:08:07.414 Null1 00:08:07.414 16:19:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:07.414 16:19:16 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:07.414 16:19:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:07.414 16:19:16 -- common/autotest_common.sh@10 -- # set +x 00:08:07.414 16:19:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:07.414 16:19:16 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:07.414 16:19:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:07.414 16:19:16 -- common/autotest_common.sh@10 -- # set +x 00:08:07.414 16:19:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:07.414 16:19:16 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:07.414 16:19:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:07.673 16:19:16 -- common/autotest_common.sh@10 -- # set +x 00:08:07.673 [2024-04-26 16:19:16.441961] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:07.673 16:19:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:07.673 16:19:16 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:07.673 16:19:16 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:07.673 16:19:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:07.673 16:19:16 -- common/autotest_common.sh@10 -- # set +x 00:08:07.673 Null2 00:08:07.673 16:19:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:07.673 16:19:16 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:07.673 16:19:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:07.673 16:19:16 -- common/autotest_common.sh@10 -- # set +x 00:08:07.673 16:19:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:07.673 16:19:16 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:07.673 16:19:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:07.673 16:19:16 -- common/autotest_common.sh@10 -- # set +x 00:08:07.673 16:19:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:07.673 16:19:16 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:08:07.673 16:19:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:07.673 16:19:16 -- common/autotest_common.sh@10 -- # set +x 00:08:07.673 16:19:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:07.673 16:19:16 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:07.673 16:19:16 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:07.673 16:19:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:07.673 16:19:16 -- common/autotest_common.sh@10 -- # set +x 00:08:07.673 Null3 00:08:07.673 16:19:16 -- common/autotest_common.sh@577 -- # [[ 
0 == 0 ]] 00:08:07.673 16:19:16 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:07.673 16:19:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:07.673 16:19:16 -- common/autotest_common.sh@10 -- # set +x 00:08:07.673 16:19:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:07.673 16:19:16 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:07.673 16:19:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:07.673 16:19:16 -- common/autotest_common.sh@10 -- # set +x 00:08:07.673 16:19:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:07.674 16:19:16 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:08:07.674 16:19:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:07.674 16:19:16 -- common/autotest_common.sh@10 -- # set +x 00:08:07.674 16:19:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:07.674 16:19:16 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:07.674 16:19:16 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:07.674 16:19:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:07.674 16:19:16 -- common/autotest_common.sh@10 -- # set +x 00:08:07.674 Null4 00:08:07.674 16:19:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:07.674 16:19:16 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:07.674 16:19:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:07.674 16:19:16 -- common/autotest_common.sh@10 -- # set +x 00:08:07.674 16:19:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:07.674 16:19:16 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:07.674 16:19:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:07.674 16:19:16 -- common/autotest_common.sh@10 -- # set +x 00:08:07.674 16:19:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:07.674 16:19:16 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:08:07.674 16:19:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:07.674 16:19:16 -- common/autotest_common.sh@10 -- # set +x 00:08:07.674 16:19:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:07.674 16:19:16 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:07.674 16:19:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:07.674 16:19:16 -- common/autotest_common.sh@10 -- # set +x 00:08:07.674 16:19:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:07.674 16:19:16 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:08:07.674 16:19:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:07.674 16:19:16 -- common/autotest_common.sh@10 -- # set +x 00:08:07.674 16:19:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:07.674 16:19:16 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -a 192.168.100.8 -s 4420 00:08:07.674 00:08:07.674 Discovery Log Number of Records 6, Generation counter 6 00:08:07.674 =====Discovery Log Entry 0====== 00:08:07.674 trtype: 
rdma 00:08:07.674 adrfam: ipv4 00:08:07.674 subtype: current discovery subsystem 00:08:07.674 treq: not required 00:08:07.674 portid: 0 00:08:07.674 trsvcid: 4420 00:08:07.674 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:07.674 traddr: 192.168.100.8 00:08:07.674 eflags: explicit discovery connections, duplicate discovery information 00:08:07.674 rdma_prtype: not specified 00:08:07.674 rdma_qptype: connected 00:08:07.674 rdma_cms: rdma-cm 00:08:07.674 rdma_pkey: 0x0000 00:08:07.674 =====Discovery Log Entry 1====== 00:08:07.674 trtype: rdma 00:08:07.674 adrfam: ipv4 00:08:07.674 subtype: nvme subsystem 00:08:07.674 treq: not required 00:08:07.674 portid: 0 00:08:07.674 trsvcid: 4420 00:08:07.674 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:07.674 traddr: 192.168.100.8 00:08:07.674 eflags: none 00:08:07.674 rdma_prtype: not specified 00:08:07.674 rdma_qptype: connected 00:08:07.674 rdma_cms: rdma-cm 00:08:07.674 rdma_pkey: 0x0000 00:08:07.674 =====Discovery Log Entry 2====== 00:08:07.674 trtype: rdma 00:08:07.674 adrfam: ipv4 00:08:07.674 subtype: nvme subsystem 00:08:07.674 treq: not required 00:08:07.674 portid: 0 00:08:07.674 trsvcid: 4420 00:08:07.674 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:07.674 traddr: 192.168.100.8 00:08:07.674 eflags: none 00:08:07.674 rdma_prtype: not specified 00:08:07.674 rdma_qptype: connected 00:08:07.674 rdma_cms: rdma-cm 00:08:07.674 rdma_pkey: 0x0000 00:08:07.674 =====Discovery Log Entry 3====== 00:08:07.674 trtype: rdma 00:08:07.674 adrfam: ipv4 00:08:07.674 subtype: nvme subsystem 00:08:07.674 treq: not required 00:08:07.674 portid: 0 00:08:07.674 trsvcid: 4420 00:08:07.674 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:07.674 traddr: 192.168.100.8 00:08:07.674 eflags: none 00:08:07.674 rdma_prtype: not specified 00:08:07.674 rdma_qptype: connected 00:08:07.674 rdma_cms: rdma-cm 00:08:07.674 rdma_pkey: 0x0000 00:08:07.674 =====Discovery Log Entry 4====== 00:08:07.674 trtype: rdma 00:08:07.674 adrfam: ipv4 00:08:07.674 subtype: nvme subsystem 00:08:07.674 treq: not required 00:08:07.674 portid: 0 00:08:07.674 trsvcid: 4420 00:08:07.674 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:07.674 traddr: 192.168.100.8 00:08:07.674 eflags: none 00:08:07.674 rdma_prtype: not specified 00:08:07.674 rdma_qptype: connected 00:08:07.674 rdma_cms: rdma-cm 00:08:07.674 rdma_pkey: 0x0000 00:08:07.674 =====Discovery Log Entry 5====== 00:08:07.674 trtype: rdma 00:08:07.674 adrfam: ipv4 00:08:07.674 subtype: discovery subsystem referral 00:08:07.674 treq: not required 00:08:07.674 portid: 0 00:08:07.674 trsvcid: 4430 00:08:07.674 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:07.674 traddr: 192.168.100.8 00:08:07.674 eflags: none 00:08:07.674 rdma_prtype: unrecognized 00:08:07.674 rdma_qptype: unrecognized 00:08:07.674 rdma_cms: unrecognized 00:08:07.674 rdma_pkey: 0x0000 00:08:07.674 16:19:16 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:07.674 Perform nvmf subsystem discovery via RPC 00:08:07.674 16:19:16 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:07.674 16:19:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:07.674 16:19:16 -- common/autotest_common.sh@10 -- # set +x 00:08:07.674 [2024-04-26 16:19:16.642524] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:07.674 [ 00:08:07.674 { 00:08:07.674 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:07.674 "subtype": "Discovery", 
00:08:07.674 "listen_addresses": [ 00:08:07.674 { 00:08:07.674 "transport": "RDMA", 00:08:07.674 "trtype": "RDMA", 00:08:07.674 "adrfam": "IPv4", 00:08:07.674 "traddr": "192.168.100.8", 00:08:07.674 "trsvcid": "4420" 00:08:07.674 } 00:08:07.674 ], 00:08:07.674 "allow_any_host": true, 00:08:07.674 "hosts": [] 00:08:07.674 }, 00:08:07.674 { 00:08:07.674 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:07.674 "subtype": "NVMe", 00:08:07.674 "listen_addresses": [ 00:08:07.674 { 00:08:07.674 "transport": "RDMA", 00:08:07.674 "trtype": "RDMA", 00:08:07.674 "adrfam": "IPv4", 00:08:07.674 "traddr": "192.168.100.8", 00:08:07.674 "trsvcid": "4420" 00:08:07.674 } 00:08:07.674 ], 00:08:07.674 "allow_any_host": true, 00:08:07.674 "hosts": [], 00:08:07.674 "serial_number": "SPDK00000000000001", 00:08:07.674 "model_number": "SPDK bdev Controller", 00:08:07.674 "max_namespaces": 32, 00:08:07.674 "min_cntlid": 1, 00:08:07.674 "max_cntlid": 65519, 00:08:07.674 "namespaces": [ 00:08:07.674 { 00:08:07.674 "nsid": 1, 00:08:07.674 "bdev_name": "Null1", 00:08:07.674 "name": "Null1", 00:08:07.674 "nguid": "776B45FB88F24A6D9BDBDB53225F366A", 00:08:07.674 "uuid": "776b45fb-88f2-4a6d-9bdb-db53225f366a" 00:08:07.674 } 00:08:07.674 ] 00:08:07.674 }, 00:08:07.674 { 00:08:07.674 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:07.674 "subtype": "NVMe", 00:08:07.674 "listen_addresses": [ 00:08:07.674 { 00:08:07.674 "transport": "RDMA", 00:08:07.674 "trtype": "RDMA", 00:08:07.674 "adrfam": "IPv4", 00:08:07.674 "traddr": "192.168.100.8", 00:08:07.674 "trsvcid": "4420" 00:08:07.674 } 00:08:07.674 ], 00:08:07.674 "allow_any_host": true, 00:08:07.674 "hosts": [], 00:08:07.674 "serial_number": "SPDK00000000000002", 00:08:07.674 "model_number": "SPDK bdev Controller", 00:08:07.674 "max_namespaces": 32, 00:08:07.674 "min_cntlid": 1, 00:08:07.674 "max_cntlid": 65519, 00:08:07.674 "namespaces": [ 00:08:07.674 { 00:08:07.674 "nsid": 1, 00:08:07.674 "bdev_name": "Null2", 00:08:07.674 "name": "Null2", 00:08:07.674 "nguid": "C412E159D9B04727BCA67FE05C147F75", 00:08:07.674 "uuid": "c412e159-d9b0-4727-bca6-7fe05c147f75" 00:08:07.674 } 00:08:07.674 ] 00:08:07.674 }, 00:08:07.674 { 00:08:07.674 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:07.674 "subtype": "NVMe", 00:08:07.674 "listen_addresses": [ 00:08:07.674 { 00:08:07.674 "transport": "RDMA", 00:08:07.674 "trtype": "RDMA", 00:08:07.674 "adrfam": "IPv4", 00:08:07.674 "traddr": "192.168.100.8", 00:08:07.674 "trsvcid": "4420" 00:08:07.674 } 00:08:07.674 ], 00:08:07.674 "allow_any_host": true, 00:08:07.675 "hosts": [], 00:08:07.675 "serial_number": "SPDK00000000000003", 00:08:07.675 "model_number": "SPDK bdev Controller", 00:08:07.675 "max_namespaces": 32, 00:08:07.675 "min_cntlid": 1, 00:08:07.675 "max_cntlid": 65519, 00:08:07.675 "namespaces": [ 00:08:07.675 { 00:08:07.675 "nsid": 1, 00:08:07.675 "bdev_name": "Null3", 00:08:07.675 "name": "Null3", 00:08:07.675 "nguid": "A6B310D3833340A5866CD5E328BC2339", 00:08:07.675 "uuid": "a6b310d3-8333-40a5-866c-d5e328bc2339" 00:08:07.675 } 00:08:07.675 ] 00:08:07.675 }, 00:08:07.675 { 00:08:07.675 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:07.675 "subtype": "NVMe", 00:08:07.675 "listen_addresses": [ 00:08:07.675 { 00:08:07.675 "transport": "RDMA", 00:08:07.675 "trtype": "RDMA", 00:08:07.675 "adrfam": "IPv4", 00:08:07.675 "traddr": "192.168.100.8", 00:08:07.675 "trsvcid": "4420" 00:08:07.675 } 00:08:07.675 ], 00:08:07.675 "allow_any_host": true, 00:08:07.675 "hosts": [], 00:08:07.675 "serial_number": "SPDK00000000000004", 00:08:07.675 "model_number": "SPDK bdev 
Controller", 00:08:07.675 "max_namespaces": 32, 00:08:07.675 "min_cntlid": 1, 00:08:07.675 "max_cntlid": 65519, 00:08:07.675 "namespaces": [ 00:08:07.675 { 00:08:07.675 "nsid": 1, 00:08:07.675 "bdev_name": "Null4", 00:08:07.675 "name": "Null4", 00:08:07.675 "nguid": "8F8741E9FF1A4243A24097058A2EE295", 00:08:07.675 "uuid": "8f8741e9-ff1a-4243-a240-97058a2ee295" 00:08:07.675 } 00:08:07.675 ] 00:08:07.675 } 00:08:07.675 ] 00:08:07.675 16:19:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:07.675 16:19:16 -- target/discovery.sh@42 -- # seq 1 4 00:08:07.675 16:19:16 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:07.675 16:19:16 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:07.675 16:19:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:07.675 16:19:16 -- common/autotest_common.sh@10 -- # set +x 00:08:07.675 16:19:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:07.675 16:19:16 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:07.675 16:19:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:07.675 16:19:16 -- common/autotest_common.sh@10 -- # set +x 00:08:07.933 16:19:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:07.933 16:19:16 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:07.933 16:19:16 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:07.933 16:19:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:07.933 16:19:16 -- common/autotest_common.sh@10 -- # set +x 00:08:07.933 16:19:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:07.933 16:19:16 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:07.933 16:19:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:07.933 16:19:16 -- common/autotest_common.sh@10 -- # set +x 00:08:07.933 16:19:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:07.933 16:19:16 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:07.933 16:19:16 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:07.933 16:19:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:07.933 16:19:16 -- common/autotest_common.sh@10 -- # set +x 00:08:07.933 16:19:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:07.933 16:19:16 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:07.933 16:19:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:07.934 16:19:16 -- common/autotest_common.sh@10 -- # set +x 00:08:07.934 16:19:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:07.934 16:19:16 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:07.934 16:19:16 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:07.934 16:19:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:07.934 16:19:16 -- common/autotest_common.sh@10 -- # set +x 00:08:07.934 16:19:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:07.934 16:19:16 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:07.934 16:19:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:07.934 16:19:16 -- common/autotest_common.sh@10 -- # set +x 00:08:07.934 16:19:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:07.934 16:19:16 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:08:07.934 16:19:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:07.934 
16:19:16 -- common/autotest_common.sh@10 -- # set +x 00:08:07.934 16:19:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:07.934 16:19:16 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:07.934 16:19:16 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:07.934 16:19:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:07.934 16:19:16 -- common/autotest_common.sh@10 -- # set +x 00:08:07.934 16:19:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:07.934 16:19:16 -- target/discovery.sh@49 -- # check_bdevs= 00:08:07.934 16:19:16 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:07.934 16:19:16 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:07.934 16:19:16 -- target/discovery.sh@57 -- # nvmftestfini 00:08:07.934 16:19:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:07.934 16:19:16 -- nvmf/common.sh@117 -- # sync 00:08:07.934 16:19:16 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:07.934 16:19:16 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:07.934 16:19:16 -- nvmf/common.sh@120 -- # set +e 00:08:07.934 16:19:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:07.934 16:19:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:07.934 rmmod nvme_rdma 00:08:07.934 rmmod nvme_fabrics 00:08:07.934 16:19:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:07.934 16:19:16 -- nvmf/common.sh@124 -- # set -e 00:08:07.934 16:19:16 -- nvmf/common.sh@125 -- # return 0 00:08:07.934 16:19:16 -- nvmf/common.sh@478 -- # '[' -n 363349 ']' 00:08:07.934 16:19:16 -- nvmf/common.sh@479 -- # killprocess 363349 00:08:07.934 16:19:16 -- common/autotest_common.sh@936 -- # '[' -z 363349 ']' 00:08:07.934 16:19:16 -- common/autotest_common.sh@940 -- # kill -0 363349 00:08:07.934 16:19:16 -- common/autotest_common.sh@941 -- # uname 00:08:07.934 16:19:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:07.934 16:19:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 363349 00:08:07.934 16:19:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:07.934 16:19:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:07.934 16:19:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 363349' 00:08:07.934 killing process with pid 363349 00:08:07.934 16:19:16 -- common/autotest_common.sh@955 -- # kill 363349 00:08:07.934 [2024-04-26 16:19:16.888497] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:07.934 16:19:16 -- common/autotest_common.sh@960 -- # wait 363349 00:08:08.192 [2024-04-26 16:19:16.974041] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:08:08.192 16:19:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:08.192 16:19:17 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:08:08.192 00:08:08.192 real 0m7.779s 00:08:08.192 user 0m8.312s 00:08:08.192 sys 0m4.777s 00:08:08.192 16:19:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:08.192 16:19:17 -- common/autotest_common.sh@10 -- # set +x 00:08:08.192 ************************************ 00:08:08.192 END TEST nvmf_discovery 00:08:08.192 ************************************ 00:08:08.451 16:19:17 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:08:08.451 16:19:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 
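[editor's note] As a recap of the two views the discovery test above exercised: the host-side discovery log (6 records: the current discovery subsystem, cnode1 through cnode4, and the port 4430 referral) and the target-side RPC dump describe the same configuration. The commands, as they appear in the trace:
# host side: walk the discovery log over RDMA
nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c \
    --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -a 192.168.100.8 -s 4420
# target side: dump every subsystem, listener, and namespace as JSON
./scripts/rpc.py nvmf_get_subsystems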
00:08:08.451 16:19:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:08.451 16:19:17 -- common/autotest_common.sh@10 -- # set +x 00:08:08.451 ************************************ 00:08:08.451 START TEST nvmf_referrals 00:08:08.451 ************************************ 00:08:08.451 16:19:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:08:08.711 * Looking for test storage... 00:08:08.711 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:08.711 16:19:17 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:08.711 16:19:17 -- nvmf/common.sh@7 -- # uname -s 00:08:08.711 16:19:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:08.711 16:19:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:08.711 16:19:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:08.711 16:19:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:08.711 16:19:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:08.711 16:19:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:08.711 16:19:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:08.711 16:19:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:08.711 16:19:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:08.711 16:19:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:08.711 16:19:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:08:08.711 16:19:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:08:08.711 16:19:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:08.711 16:19:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:08.711 16:19:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:08.711 16:19:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:08.711 16:19:17 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:08.711 16:19:17 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:08.711 16:19:17 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:08.711 16:19:17 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:08.712 16:19:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.712 16:19:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.712 
16:19:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.712 16:19:17 -- paths/export.sh@5 -- # export PATH 00:08:08.712 16:19:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:08.712 16:19:17 -- nvmf/common.sh@47 -- # : 0 00:08:08.712 16:19:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:08.712 16:19:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:08.712 16:19:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:08.712 16:19:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:08.712 16:19:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:08.712 16:19:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:08.712 16:19:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:08.712 16:19:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:08.712 16:19:17 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:08.712 16:19:17 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:08.712 16:19:17 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:08.712 16:19:17 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:08.712 16:19:17 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:08.712 16:19:17 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:08.712 16:19:17 -- target/referrals.sh@37 -- # nvmftestinit 00:08:08.712 16:19:17 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:08:08.712 16:19:17 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:08.712 16:19:17 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:08.712 16:19:17 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:08.712 16:19:17 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:08.712 16:19:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.712 16:19:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:08.712 16:19:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:08.712 16:19:17 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:08.712 16:19:17 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:08.712 16:19:17 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:08.712 16:19:17 -- common/autotest_common.sh@10 -- # set +x 00:08:15.285 16:19:23 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:15.285 16:19:23 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:15.285 16:19:23 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:15.285 16:19:23 -- 
nvmf/common.sh@292 -- # pci_net_devs=() 00:08:15.285 16:19:23 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:15.285 16:19:23 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:15.285 16:19:23 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:15.285 16:19:23 -- nvmf/common.sh@295 -- # net_devs=() 00:08:15.285 16:19:23 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:15.285 16:19:23 -- nvmf/common.sh@296 -- # e810=() 00:08:15.285 16:19:23 -- nvmf/common.sh@296 -- # local -ga e810 00:08:15.285 16:19:23 -- nvmf/common.sh@297 -- # x722=() 00:08:15.285 16:19:23 -- nvmf/common.sh@297 -- # local -ga x722 00:08:15.285 16:19:23 -- nvmf/common.sh@298 -- # mlx=() 00:08:15.285 16:19:23 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:15.285 16:19:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:15.285 16:19:23 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:15.285 16:19:23 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:15.285 16:19:23 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:15.285 16:19:23 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:15.285 16:19:23 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:15.285 16:19:23 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:15.285 16:19:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:15.285 16:19:23 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:15.285 16:19:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:15.285 16:19:23 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:15.285 16:19:23 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:15.285 16:19:23 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:15.285 16:19:23 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:15.285 16:19:23 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:15.285 16:19:23 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:15.285 16:19:23 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:15.285 16:19:23 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:15.285 16:19:23 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:15.285 16:19:23 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:08:15.285 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:08:15.285 16:19:23 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:15.285 16:19:23 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:15.285 16:19:23 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:08:15.285 16:19:23 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:08:15.285 16:19:23 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:15.285 16:19:23 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:15.285 16:19:23 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:15.285 16:19:23 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:08:15.285 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:08:15.285 16:19:23 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:15.285 16:19:23 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:15.285 16:19:23 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:08:15.285 16:19:23 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:08:15.285 16:19:23 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:15.285 16:19:23 -- 
nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:15.285 16:19:23 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:15.285 16:19:23 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:15.285 16:19:23 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:15.285 16:19:23 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.285 16:19:23 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:15.285 16:19:23 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.285 16:19:23 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:08:15.285 Found net devices under 0000:18:00.0: mlx_0_0 00:08:15.285 16:19:23 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.285 16:19:23 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:15.285 16:19:23 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.285 16:19:23 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:15.285 16:19:23 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.285 16:19:23 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:08:15.285 Found net devices under 0000:18:00.1: mlx_0_1 00:08:15.285 16:19:23 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.285 16:19:23 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:15.285 16:19:23 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:15.285 16:19:23 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:15.285 16:19:23 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:08:15.285 16:19:23 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:08:15.285 16:19:23 -- nvmf/common.sh@409 -- # rdma_device_init 00:08:15.285 16:19:23 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:08:15.285 16:19:23 -- nvmf/common.sh@58 -- # uname 00:08:15.285 16:19:23 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:15.285 16:19:23 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:15.285 16:19:23 -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:15.285 16:19:23 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:15.285 16:19:23 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:15.285 16:19:23 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:15.285 16:19:23 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:15.285 16:19:23 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:15.285 16:19:23 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:08:15.285 16:19:23 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:15.285 16:19:23 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:15.286 16:19:23 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:15.286 16:19:23 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:15.286 16:19:23 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:15.286 16:19:23 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:15.286 16:19:23 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:15.286 16:19:23 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:15.286 16:19:23 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:15.286 16:19:23 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:15.286 16:19:23 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:15.286 16:19:23 -- nvmf/common.sh@105 -- # continue 2 00:08:15.286 16:19:23 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:15.286 16:19:23 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:15.286 16:19:23 -- 
nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:15.286 16:19:23 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:15.286 16:19:23 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:15.286 16:19:23 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:15.286 16:19:23 -- nvmf/common.sh@105 -- # continue 2 00:08:15.286 16:19:23 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:15.286 16:19:23 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:15.286 16:19:23 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:15.286 16:19:23 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:15.286 16:19:23 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:15.286 16:19:23 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:15.286 16:19:23 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:15.286 16:19:23 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:15.286 16:19:23 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:15.286 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:15.286 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:08:15.286 altname enp24s0f0np0 00:08:15.286 altname ens785f0np0 00:08:15.286 inet 192.168.100.8/24 scope global mlx_0_0 00:08:15.286 valid_lft forever preferred_lft forever 00:08:15.286 16:19:23 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:15.286 16:19:23 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:15.286 16:19:23 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:15.286 16:19:23 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:15.286 16:19:23 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:15.286 16:19:23 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:15.286 16:19:23 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:15.286 16:19:23 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:15.286 16:19:23 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:15.286 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:15.286 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:08:15.286 altname enp24s0f1np1 00:08:15.286 altname ens785f1np1 00:08:15.286 inet 192.168.100.9/24 scope global mlx_0_1 00:08:15.286 valid_lft forever preferred_lft forever 00:08:15.286 16:19:23 -- nvmf/common.sh@411 -- # return 0 00:08:15.286 16:19:23 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:15.286 16:19:23 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:15.286 16:19:23 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:08:15.286 16:19:23 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:08:15.286 16:19:23 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:15.286 16:19:23 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:15.286 16:19:23 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:15.286 16:19:23 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:15.286 16:19:23 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:15.286 16:19:23 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:15.286 16:19:24 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:15.286 16:19:24 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:15.286 16:19:24 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:15.286 16:19:24 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:15.286 16:19:24 -- nvmf/common.sh@105 -- # continue 2 00:08:15.286 16:19:24 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:15.286 16:19:24 -- 
nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:15.286 16:19:24 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:15.286 16:19:24 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:15.286 16:19:24 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:15.286 16:19:24 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:15.286 16:19:24 -- nvmf/common.sh@105 -- # continue 2 00:08:15.286 16:19:24 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:15.286 16:19:24 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:15.286 16:19:24 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:15.286 16:19:24 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:15.286 16:19:24 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:15.286 16:19:24 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:15.286 16:19:24 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:15.286 16:19:24 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:15.286 16:19:24 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:15.286 16:19:24 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:15.286 16:19:24 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:15.286 16:19:24 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:15.286 16:19:24 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:08:15.286 192.168.100.9' 00:08:15.286 16:19:24 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:15.286 192.168.100.9' 00:08:15.286 16:19:24 -- nvmf/common.sh@446 -- # head -n 1 00:08:15.286 16:19:24 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:15.286 16:19:24 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:08:15.286 192.168.100.9' 00:08:15.286 16:19:24 -- nvmf/common.sh@447 -- # tail -n +2 00:08:15.286 16:19:24 -- nvmf/common.sh@447 -- # head -n 1 00:08:15.286 16:19:24 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:15.286 16:19:24 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:08:15.286 16:19:24 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:15.286 16:19:24 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:08:15.286 16:19:24 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:08:15.286 16:19:24 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:08:15.286 16:19:24 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:15.286 16:19:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:15.286 16:19:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:15.286 16:19:24 -- common/autotest_common.sh@10 -- # set +x 00:08:15.286 16:19:24 -- nvmf/common.sh@470 -- # nvmfpid=366509 00:08:15.286 16:19:24 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:15.286 16:19:24 -- nvmf/common.sh@471 -- # waitforlisten 366509 00:08:15.286 16:19:24 -- common/autotest_common.sh@817 -- # '[' -z 366509 ']' 00:08:15.287 16:19:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.287 16:19:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:15.287 16:19:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
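For reference, the allocate_nic_ips/get_ip_address steps traced above reduce to one pipeline per RDMA netdev; a minimal sketch using the interface names and addresses seen in this run (mlx_0_0 and mlx_0_1 are the mlx5 port netdevs found under 0000:18:00.0 and 0000:18:00.1):

  ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1    # -> 192.168.100.8 (NVMF_FIRST_TARGET_IP)
  ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1    # -> 192.168.100.9 (NVMF_SECOND_TARGET_IP)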
00:08:15.287 16:19:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:15.287 16:19:24 -- common/autotest_common.sh@10 -- # set +x 00:08:15.287 [2024-04-26 16:19:24.132861] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:08:15.287 [2024-04-26 16:19:24.132917] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.287 EAL: No free 2048 kB hugepages reported on node 1 00:08:15.287 [2024-04-26 16:19:24.205500] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:15.287 [2024-04-26 16:19:24.290984] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:15.287 [2024-04-26 16:19:24.291029] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:15.287 [2024-04-26 16:19:24.291038] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:15.287 [2024-04-26 16:19:24.291047] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:15.287 [2024-04-26 16:19:24.291055] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:15.287 [2024-04-26 16:19:24.291121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.287 [2024-04-26 16:19:24.291208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:15.287 [2024-04-26 16:19:24.291290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:15.287 [2024-04-26 16:19:24.291291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.225 16:19:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:16.225 16:19:24 -- common/autotest_common.sh@850 -- # return 0 00:08:16.225 16:19:24 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:16.225 16:19:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:16.225 16:19:24 -- common/autotest_common.sh@10 -- # set +x 00:08:16.225 16:19:24 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:16.225 16:19:24 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:16.225 16:19:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:16.225 16:19:24 -- common/autotest_common.sh@10 -- # set +x 00:08:16.225 [2024-04-26 16:19:25.016897] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2166310/0x216a800) succeed. 00:08:16.225 [2024-04-26 16:19:25.027254] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2167950/0x21abe90) succeed. 
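For reference, the nvmfappstart and transport-creation steps traced above amount to the following standalone commands, run from the SPDK checkout used by this job; this sketch assumes rpc_cmd is the usual autotest wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock:

  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # waitforlisten blocks until the target answers on /var/tmp/spdk.sock, then:
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  # on success the target logs one create_ib_device line per mlx5 port (mlx5_0, mlx5_1)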
00:08:16.225 16:19:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:16.225 16:19:25 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:08:16.225 16:19:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:16.225 16:19:25 -- common/autotest_common.sh@10 -- # set +x 00:08:16.225 [2024-04-26 16:19:25.157318] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:08:16.225 16:19:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:16.225 16:19:25 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:08:16.225 16:19:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:16.225 16:19:25 -- common/autotest_common.sh@10 -- # set +x 00:08:16.225 16:19:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:16.225 16:19:25 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:08:16.225 16:19:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:16.225 16:19:25 -- common/autotest_common.sh@10 -- # set +x 00:08:16.225 16:19:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:16.225 16:19:25 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:08:16.225 16:19:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:16.225 16:19:25 -- common/autotest_common.sh@10 -- # set +x 00:08:16.225 16:19:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:16.225 16:19:25 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:16.225 16:19:25 -- target/referrals.sh@48 -- # jq length 00:08:16.225 16:19:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:16.225 16:19:25 -- common/autotest_common.sh@10 -- # set +x 00:08:16.225 16:19:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:16.225 16:19:25 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:16.225 16:19:25 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:16.225 16:19:25 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:16.225 16:19:25 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:16.225 16:19:25 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:16.225 16:19:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:16.225 16:19:25 -- common/autotest_common.sh@10 -- # set +x 00:08:16.225 16:19:25 -- target/referrals.sh@21 -- # sort 00:08:16.485 16:19:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:16.485 16:19:25 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:16.485 16:19:25 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:16.485 16:19:25 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:16.485 16:19:25 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:16.485 16:19:25 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:16.485 16:19:25 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:16.485 16:19:25 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:16.485 16:19:25 -- target/referrals.sh@26 -- # sort 00:08:16.485 16:19:25 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 
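For reference, the listener and referral setup just traced can be reproduced with the same RPCs and the same nvme-cli query the script issues; NVME_HOSTNQN and NVME_HOSTID are the values exported by nvmf/common.sh above, and rpc.py is assumed to be what rpc_cmd wraps:

  ./scripts/rpc.py nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      ./scripts/rpc.py nvmf_discovery_add_referral -t rdma -a $ip -s 4430
  done
  ./scripts/rpc.py nvmf_discovery_get_referrals | jq length       # expect 3
  nvme discover --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t rdma -a 192.168.100.8 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort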
00:08:16.485 16:19:25 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:16.485 16:19:25 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:08:16.485 16:19:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:16.485 16:19:25 -- common/autotest_common.sh@10 -- # set +x 00:08:16.485 16:19:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:16.485 16:19:25 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:08:16.485 16:19:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:16.485 16:19:25 -- common/autotest_common.sh@10 -- # set +x 00:08:16.485 16:19:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:16.485 16:19:25 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:08:16.485 16:19:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:16.485 16:19:25 -- common/autotest_common.sh@10 -- # set +x 00:08:16.485 16:19:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:16.485 16:19:25 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:16.485 16:19:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:16.485 16:19:25 -- target/referrals.sh@56 -- # jq length 00:08:16.485 16:19:25 -- common/autotest_common.sh@10 -- # set +x 00:08:16.485 16:19:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:16.485 16:19:25 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:16.485 16:19:25 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:16.485 16:19:25 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:16.485 16:19:25 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:16.485 16:19:25 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:16.485 16:19:25 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:16.485 16:19:25 -- target/referrals.sh@26 -- # sort 00:08:16.485 16:19:25 -- target/referrals.sh@26 -- # echo 00:08:16.485 16:19:25 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:16.485 16:19:25 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:08:16.485 16:19:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:16.485 16:19:25 -- common/autotest_common.sh@10 -- # set +x 00:08:16.744 16:19:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:16.744 16:19:25 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:16.744 16:19:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:16.744 16:19:25 -- common/autotest_common.sh@10 -- # set +x 00:08:16.744 16:19:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:16.744 16:19:25 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:16.744 16:19:25 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:16.744 16:19:25 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:16.744 16:19:25 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:16.744 16:19:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:16.744 16:19:25 -- target/referrals.sh@21 -- # sort 00:08:16.744 16:19:25 -- 
common/autotest_common.sh@10 -- # set +x 00:08:16.744 16:19:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:16.744 16:19:25 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:16.744 16:19:25 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:16.744 16:19:25 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:16.744 16:19:25 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:16.744 16:19:25 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:16.744 16:19:25 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:16.744 16:19:25 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:16.744 16:19:25 -- target/referrals.sh@26 -- # sort 00:08:16.744 16:19:25 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:16.744 16:19:25 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:16.744 16:19:25 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:16.744 16:19:25 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:16.744 16:19:25 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:16.744 16:19:25 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:16.744 16:19:25 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:16.744 16:19:25 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:16.744 16:19:25 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:16.744 16:19:25 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:16.744 16:19:25 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:16.744 16:19:25 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:16.744 16:19:25 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:17.002 16:19:25 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:17.002 16:19:25 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:17.002 16:19:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.002 16:19:25 -- common/autotest_common.sh@10 -- # set +x 00:08:17.002 16:19:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.002 16:19:25 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:17.002 16:19:25 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:17.002 16:19:25 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:17.002 16:19:25 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:17.002 16:19:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.002 16:19:25 -- common/autotest_common.sh@10 -- # set +x 00:08:17.002 16:19:25 -- target/referrals.sh@21 
-- # sort 00:08:17.002 16:19:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.002 16:19:25 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:17.002 16:19:25 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:17.002 16:19:25 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:17.002 16:19:25 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:17.002 16:19:25 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:17.002 16:19:25 -- target/referrals.sh@26 -- # sort 00:08:17.002 16:19:25 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:17.002 16:19:25 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:17.002 16:19:25 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:17.002 16:19:25 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:17.002 16:19:25 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:17.003 16:19:25 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:17.003 16:19:25 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:17.003 16:19:25 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:17.003 16:19:25 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:17.261 16:19:26 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:17.261 16:19:26 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:17.261 16:19:26 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:17.261 16:19:26 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:17.261 16:19:26 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:17.261 16:19:26 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:17.261 16:19:26 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:17.261 16:19:26 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:17.261 16:19:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.261 16:19:26 -- common/autotest_common.sh@10 -- # set +x 00:08:17.261 16:19:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.261 16:19:26 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:17.261 16:19:26 -- target/referrals.sh@82 -- # jq length 00:08:17.261 16:19:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:17.261 16:19:26 -- common/autotest_common.sh@10 -- # set +x 00:08:17.261 16:19:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:17.261 16:19:26 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:17.261 16:19:26 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:17.261 16:19:26 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:17.261 16:19:26 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:17.261 16:19:26 
-- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:17.261 16:19:26 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:17.261 16:19:26 -- target/referrals.sh@26 -- # sort 00:08:17.261 16:19:26 -- target/referrals.sh@26 -- # echo 00:08:17.261 16:19:26 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:17.261 16:19:26 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:17.261 16:19:26 -- target/referrals.sh@86 -- # nvmftestfini 00:08:17.261 16:19:26 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:17.261 16:19:26 -- nvmf/common.sh@117 -- # sync 00:08:17.261 16:19:26 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:17.261 16:19:26 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:17.261 16:19:26 -- nvmf/common.sh@120 -- # set +e 00:08:17.261 16:19:26 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:17.261 16:19:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:17.261 rmmod nvme_rdma 00:08:17.520 rmmod nvme_fabrics 00:08:17.520 16:19:26 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:17.520 16:19:26 -- nvmf/common.sh@124 -- # set -e 00:08:17.521 16:19:26 -- nvmf/common.sh@125 -- # return 0 00:08:17.521 16:19:26 -- nvmf/common.sh@478 -- # '[' -n 366509 ']' 00:08:17.521 16:19:26 -- nvmf/common.sh@479 -- # killprocess 366509 00:08:17.521 16:19:26 -- common/autotest_common.sh@936 -- # '[' -z 366509 ']' 00:08:17.521 16:19:26 -- common/autotest_common.sh@940 -- # kill -0 366509 00:08:17.521 16:19:26 -- common/autotest_common.sh@941 -- # uname 00:08:17.521 16:19:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:17.521 16:19:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 366509 00:08:17.521 16:19:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:17.521 16:19:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:17.521 16:19:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 366509' 00:08:17.521 killing process with pid 366509 00:08:17.521 16:19:26 -- common/autotest_common.sh@955 -- # kill 366509 00:08:17.521 16:19:26 -- common/autotest_common.sh@960 -- # wait 366509 00:08:17.521 [2024-04-26 16:19:26.460967] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:08:17.781 16:19:26 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:17.781 16:19:26 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:08:17.781 00:08:17.781 real 0m9.284s 00:08:17.781 user 0m11.951s 00:08:17.781 sys 0m5.902s 00:08:17.781 16:19:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:17.781 16:19:26 -- common/autotest_common.sh@10 -- # set +x 00:08:17.781 ************************************ 00:08:17.781 END TEST nvmf_referrals 00:08:17.781 ************************************ 00:08:17.781 16:19:26 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:08:17.781 16:19:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:17.781 16:19:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:17.781 16:19:26 -- common/autotest_common.sh@10 -- # set +x 00:08:18.040 ************************************ 00:08:18.040 START TEST nvmf_connect_disconnect 00:08:18.040 ************************************ 
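For reference, the tail end of referrals.sh traced above exercises NQN-qualified referrals before tearing down; a minimal sketch of that RPC sequence, under the same assumption that rpc_cmd maps to scripts/rpc.py:

  ./scripts/rpc.py nvmf_discovery_add_referral    -t rdma -a 127.0.0.2 -s 4430 -n discovery
  ./scripts/rpc.py nvmf_discovery_add_referral    -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
  ./scripts/rpc.py nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
  ./scripts/rpc.py nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery
  ./scripts/rpc.py nvmf_discovery_get_referrals | jq length       # back to 0 before nvmftestfini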
00:08:18.040 16:19:26 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:08:18.040 * Looking for test storage... 00:08:18.040 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:18.040 16:19:26 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:18.040 16:19:26 -- nvmf/common.sh@7 -- # uname -s 00:08:18.040 16:19:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:18.040 16:19:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:18.040 16:19:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:18.040 16:19:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:18.040 16:19:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:18.040 16:19:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:18.040 16:19:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:18.040 16:19:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:18.040 16:19:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:18.040 16:19:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:18.040 16:19:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:08:18.040 16:19:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:08:18.040 16:19:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:18.040 16:19:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:18.040 16:19:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:18.040 16:19:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:18.040 16:19:27 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:18.040 16:19:27 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:18.040 16:19:27 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:18.040 16:19:27 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:18.040 16:19:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.040 16:19:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.040 16:19:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.040 16:19:27 -- paths/export.sh@5 -- # export PATH 00:08:18.040 16:19:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.040 16:19:27 -- nvmf/common.sh@47 -- # : 0 00:08:18.040 16:19:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:18.040 16:19:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:18.040 16:19:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:18.040 16:19:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:18.040 16:19:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:18.040 16:19:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:18.040 16:19:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:18.040 16:19:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:18.040 16:19:27 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:18.040 16:19:27 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:18.040 16:19:27 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:18.040 16:19:27 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:08:18.040 16:19:27 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:18.040 16:19:27 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:18.040 16:19:27 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:18.040 16:19:27 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:18.040 16:19:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.040 16:19:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:18.040 16:19:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.040 16:19:27 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:18.040 16:19:27 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:18.040 16:19:27 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:18.040 16:19:27 -- common/autotest_common.sh@10 -- # set +x 00:08:24.609 16:19:32 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:24.609 16:19:32 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:24.609 16:19:32 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:24.609 16:19:32 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:24.609 16:19:32 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:24.609 16:19:32 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:24.609 16:19:32 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:24.609 16:19:32 -- nvmf/common.sh@295 -- # net_devs=() 00:08:24.609 16:19:32 -- nvmf/common.sh@295 -- # local -ga net_devs 
00:08:24.609 16:19:32 -- nvmf/common.sh@296 -- # e810=() 00:08:24.609 16:19:32 -- nvmf/common.sh@296 -- # local -ga e810 00:08:24.609 16:19:32 -- nvmf/common.sh@297 -- # x722=() 00:08:24.609 16:19:32 -- nvmf/common.sh@297 -- # local -ga x722 00:08:24.609 16:19:32 -- nvmf/common.sh@298 -- # mlx=() 00:08:24.609 16:19:32 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:24.609 16:19:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:24.609 16:19:32 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:24.609 16:19:32 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:24.609 16:19:32 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:24.609 16:19:32 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:24.609 16:19:32 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:24.609 16:19:32 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:24.609 16:19:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:24.609 16:19:32 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:24.609 16:19:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:24.609 16:19:32 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:24.609 16:19:32 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:24.609 16:19:32 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:24.609 16:19:32 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:24.609 16:19:32 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:24.609 16:19:32 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:24.609 16:19:32 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:24.609 16:19:32 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:24.609 16:19:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:24.609 16:19:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:08:24.609 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:08:24.609 16:19:32 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:24.609 16:19:32 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:24.609 16:19:32 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:08:24.609 16:19:32 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:08:24.609 16:19:32 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:24.609 16:19:32 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:24.609 16:19:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:24.609 16:19:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:08:24.609 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:08:24.609 16:19:32 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:24.609 16:19:32 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:24.609 16:19:32 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:08:24.609 16:19:32 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:08:24.609 16:19:32 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:24.609 16:19:32 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:24.609 16:19:32 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:24.609 16:19:32 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:24.609 16:19:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:24.609 16:19:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:24.609 16:19:32 
-- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:24.609 16:19:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:24.609 16:19:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:08:24.609 Found net devices under 0000:18:00.0: mlx_0_0 00:08:24.609 16:19:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:24.609 16:19:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:24.609 16:19:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:24.609 16:19:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:24.609 16:19:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:24.609 16:19:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:08:24.609 Found net devices under 0000:18:00.1: mlx_0_1 00:08:24.609 16:19:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:24.609 16:19:32 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:24.609 16:19:32 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:24.609 16:19:32 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:24.609 16:19:32 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:08:24.609 16:19:32 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:08:24.609 16:19:32 -- nvmf/common.sh@409 -- # rdma_device_init 00:08:24.609 16:19:32 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:08:24.609 16:19:32 -- nvmf/common.sh@58 -- # uname 00:08:24.609 16:19:32 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:24.609 16:19:32 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:24.609 16:19:32 -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:24.609 16:19:32 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:24.609 16:19:32 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:24.609 16:19:32 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:24.609 16:19:32 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:24.609 16:19:32 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:24.609 16:19:32 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:08:24.609 16:19:32 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:24.609 16:19:32 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:24.609 16:19:32 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:24.609 16:19:32 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:24.609 16:19:32 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:24.609 16:19:32 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:24.609 16:19:32 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:24.609 16:19:32 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:24.609 16:19:32 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:24.609 16:19:32 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:24.609 16:19:32 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:24.609 16:19:32 -- nvmf/common.sh@105 -- # continue 2 00:08:24.609 16:19:32 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:24.609 16:19:32 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:24.609 16:19:32 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:24.609 16:19:32 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:24.609 16:19:32 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:24.609 16:19:32 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:24.609 16:19:32 -- nvmf/common.sh@105 -- # continue 2 00:08:24.609 16:19:32 -- 
nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:24.609 16:19:32 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:24.609 16:19:32 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:24.609 16:19:32 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:24.609 16:19:32 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:24.609 16:19:32 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:24.609 16:19:32 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:24.609 16:19:32 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:24.609 16:19:32 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:24.609 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:24.609 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:08:24.609 altname enp24s0f0np0 00:08:24.609 altname ens785f0np0 00:08:24.609 inet 192.168.100.8/24 scope global mlx_0_0 00:08:24.609 valid_lft forever preferred_lft forever 00:08:24.609 16:19:32 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:24.609 16:19:32 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:24.609 16:19:32 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:24.609 16:19:32 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:24.609 16:19:32 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:24.609 16:19:32 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:24.609 16:19:32 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:24.609 16:19:32 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:24.609 16:19:32 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:24.609 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:24.609 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:08:24.609 altname enp24s0f1np1 00:08:24.609 altname ens785f1np1 00:08:24.609 inet 192.168.100.9/24 scope global mlx_0_1 00:08:24.609 valid_lft forever preferred_lft forever 00:08:24.610 16:19:32 -- nvmf/common.sh@411 -- # return 0 00:08:24.610 16:19:32 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:24.610 16:19:32 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:24.610 16:19:32 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:08:24.610 16:19:32 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:08:24.610 16:19:32 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:24.610 16:19:32 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:24.610 16:19:32 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:24.610 16:19:32 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:24.610 16:19:32 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:24.610 16:19:32 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:24.610 16:19:32 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:24.610 16:19:32 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:24.610 16:19:32 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:24.610 16:19:32 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:24.610 16:19:32 -- nvmf/common.sh@105 -- # continue 2 00:08:24.610 16:19:32 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:24.610 16:19:32 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:24.610 16:19:32 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:24.610 16:19:32 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:24.610 16:19:32 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:24.610 16:19:32 -- nvmf/common.sh@104 -- # echo mlx_0_1 
00:08:24.610 16:19:32 -- nvmf/common.sh@105 -- # continue 2 00:08:24.610 16:19:32 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:24.610 16:19:32 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:24.610 16:19:32 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:24.610 16:19:32 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:24.610 16:19:32 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:24.610 16:19:32 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:24.610 16:19:32 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:24.610 16:19:32 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:24.610 16:19:32 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:24.610 16:19:32 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:24.610 16:19:32 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:24.610 16:19:32 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:24.610 16:19:32 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:08:24.610 192.168.100.9' 00:08:24.610 16:19:32 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:24.610 192.168.100.9' 00:08:24.610 16:19:32 -- nvmf/common.sh@446 -- # head -n 1 00:08:24.610 16:19:32 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:24.610 16:19:32 -- nvmf/common.sh@447 -- # head -n 1 00:08:24.610 16:19:32 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:08:24.610 192.168.100.9' 00:08:24.610 16:19:32 -- nvmf/common.sh@447 -- # tail -n +2 00:08:24.610 16:19:32 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:24.610 16:19:32 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:08:24.610 16:19:32 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:24.610 16:19:32 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:08:24.610 16:19:32 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:08:24.610 16:19:32 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:08:24.610 16:19:32 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:24.610 16:19:32 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:24.610 16:19:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:24.610 16:19:32 -- common/autotest_common.sh@10 -- # set +x 00:08:24.610 16:19:32 -- nvmf/common.sh@470 -- # nvmfpid=369889 00:08:24.610 16:19:32 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:24.610 16:19:32 -- nvmf/common.sh@471 -- # waitforlisten 369889 00:08:24.610 16:19:32 -- common/autotest_common.sh@817 -- # '[' -z 369889 ']' 00:08:24.610 16:19:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.610 16:19:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:24.610 16:19:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:24.610 16:19:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:24.610 16:19:32 -- common/autotest_common.sh@10 -- # set +x 00:08:24.610 [2024-04-26 16:19:32.952460] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
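Note: NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP are carved out of the newline-separated RDMA_IP_LIST with head and tail, exactly as the xtrace above shows. Condensed:

    RDMA_IP_LIST='192.168.100.8
    192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9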
00:08:24.610 [2024-04-26 16:19:32.952517] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:24.610 EAL: No free 2048 kB hugepages reported on node 1 00:08:24.610 [2024-04-26 16:19:33.027393] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:24.610 [2024-04-26 16:19:33.113522] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:24.610 [2024-04-26 16:19:33.113564] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:24.610 [2024-04-26 16:19:33.113573] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:24.610 [2024-04-26 16:19:33.113597] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:24.610 [2024-04-26 16:19:33.113604] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:24.610 [2024-04-26 16:19:33.113654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:24.610 [2024-04-26 16:19:33.113739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:24.610 [2024-04-26 16:19:33.113818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:24.610 [2024-04-26 16:19:33.113820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.869 16:19:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:24.869 16:19:33 -- common/autotest_common.sh@850 -- # return 0 00:08:24.869 16:19:33 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:24.869 16:19:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:24.869 16:19:33 -- common/autotest_common.sh@10 -- # set +x 00:08:24.869 16:19:33 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:24.869 16:19:33 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:08:24.869 16:19:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:24.869 16:19:33 -- common/autotest_common.sh@10 -- # set +x 00:08:24.869 [2024-04-26 16:19:33.828287] rdma.c:2712:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:08:24.869 [2024-04-26 16:19:33.848684] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x24e3310/0x24e7800) succeed. 00:08:24.869 [2024-04-26 16:19:33.859017] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x24e4950/0x2528e90) succeed. 
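Note: rpc_cmd is the test harness wrapper around SPDK's RPC client, so the transport created above corresponds to a plain rpc.py invocation roughly like the following (script path and default RPC socket are assumptions based on the workspace layout seen in this log):

    # Create the RDMA transport on the running nvmf_tgt
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0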
00:08:25.129 16:19:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:25.129 16:19:33 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:25.129 16:19:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:25.129 16:19:33 -- common/autotest_common.sh@10 -- # set +x 00:08:25.129 16:19:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:25.129 16:19:33 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:25.129 16:19:33 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:25.129 16:19:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:25.129 16:19:33 -- common/autotest_common.sh@10 -- # set +x 00:08:25.129 16:19:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:25.129 16:19:33 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:25.129 16:19:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:25.129 16:19:33 -- common/autotest_common.sh@10 -- # set +x 00:08:25.129 16:19:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:25.129 16:19:34 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:25.129 16:19:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:25.129 16:19:34 -- common/autotest_common.sh@10 -- # set +x 00:08:25.129 [2024-04-26 16:19:34.008486] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:25.129 16:19:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:25.129 16:19:34 -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:25.129 16:19:34 -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:25.129 16:19:34 -- target/connect_disconnect.sh@34 -- # set +x 00:08:33.248 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:38.832 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:46.947 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.511 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.072 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.072 16:20:08 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:09:00.072 16:20:08 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:09:00.072 16:20:08 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:00.072 16:20:08 -- nvmf/common.sh@117 -- # sync 00:09:00.072 16:20:08 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:00.072 16:20:08 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:00.072 16:20:08 -- nvmf/common.sh@120 -- # set +e 00:09:00.072 16:20:08 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:00.072 16:20:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:00.072 rmmod nvme_rdma 00:09:00.072 rmmod nvme_fabrics 00:09:00.072 16:20:08 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:00.072 16:20:08 -- nvmf/common.sh@124 -- # set -e 00:09:00.072 16:20:08 -- nvmf/common.sh@125 -- # return 0 00:09:00.072 16:20:08 -- nvmf/common.sh@478 -- # '[' -n 369889 ']' 00:09:00.072 16:20:08 -- nvmf/common.sh@479 -- # killprocess 369889 00:09:00.072 16:20:08 -- common/autotest_common.sh@936 -- # '[' -z 369889 ']' 00:09:00.072 16:20:08 -- common/autotest_common.sh@940 -- # kill -0 369889 00:09:00.072 16:20:08 -- common/autotest_common.sh@941 -- # uname 00:09:00.072 16:20:08 -- common/autotest_common.sh@941 -- # '[' 
Linux = Linux ']' 00:09:00.072 16:20:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 369889 00:09:00.072 16:20:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:00.072 16:20:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:00.072 16:20:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 369889' 00:09:00.072 killing process with pid 369889 00:09:00.072 16:20:08 -- common/autotest_common.sh@955 -- # kill 369889 00:09:00.072 16:20:08 -- common/autotest_common.sh@960 -- # wait 369889 00:09:00.072 [2024-04-26 16:20:08.358316] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:09:00.072 16:20:08 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:00.072 16:20:08 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:09:00.072 00:09:00.072 real 0m41.698s 00:09:00.072 user 2m22.925s 00:09:00.072 sys 0m6.183s 00:09:00.072 16:20:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:00.072 16:20:08 -- common/autotest_common.sh@10 -- # set +x 00:09:00.072 ************************************ 00:09:00.072 END TEST nvmf_connect_disconnect 00:09:00.072 ************************************ 00:09:00.072 16:20:08 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:09:00.072 16:20:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:00.072 16:20:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:00.072 16:20:08 -- common/autotest_common.sh@10 -- # set +x 00:09:00.072 ************************************ 00:09:00.072 START TEST nvmf_multitarget 00:09:00.072 ************************************ 00:09:00.072 16:20:08 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:09:00.072 * Looking for test storage... 
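Note: the five "disconnected 1 controller(s)" lines in the connect_disconnect run above are nvme-cli output; the loop itself executes with xtrace suppressed (set +x). Given the subsystem and listener configured just before it, each iteration presumably boils down to something like this on the host side (the connect command is an assumption, not captured in this log):

    # Target side (shown above): Malloc0 exported as nqn.2016-06.io.spdk:cnode1 on 192.168.100.8:4420 over RDMA
    nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints "NQN:... disconnected 1 controller(s)"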
00:09:00.072 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:00.072 16:20:08 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:00.072 16:20:08 -- nvmf/common.sh@7 -- # uname -s 00:09:00.072 16:20:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:00.072 16:20:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:00.072 16:20:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:00.072 16:20:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:00.072 16:20:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:00.072 16:20:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:00.072 16:20:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:00.072 16:20:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:00.072 16:20:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:00.072 16:20:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:00.072 16:20:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:09:00.072 16:20:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:09:00.072 16:20:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:00.072 16:20:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:00.072 16:20:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:00.072 16:20:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:00.072 16:20:08 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:00.072 16:20:08 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:00.072 16:20:08 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:00.072 16:20:08 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:00.072 16:20:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.072 16:20:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.072 16:20:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.072 16:20:08 -- paths/export.sh@5 -- # export PATH 00:09:00.072 16:20:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.072 16:20:08 -- nvmf/common.sh@47 -- # : 0 00:09:00.072 16:20:08 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:00.072 16:20:08 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:00.072 16:20:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:00.072 16:20:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:00.072 16:20:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:00.072 16:20:08 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:00.072 16:20:08 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:00.072 16:20:08 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:00.072 16:20:08 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:00.072 16:20:08 -- target/multitarget.sh@15 -- # nvmftestinit 00:09:00.072 16:20:08 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:09:00.072 16:20:08 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:00.072 16:20:08 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:00.072 16:20:08 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:00.072 16:20:08 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:00.072 16:20:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.072 16:20:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:00.072 16:20:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.072 16:20:08 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:00.072 16:20:08 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:00.072 16:20:08 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:00.072 16:20:08 -- common/autotest_common.sh@10 -- # set +x 00:09:06.642 16:20:14 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:06.642 16:20:14 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:06.642 16:20:14 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:06.642 16:20:14 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:06.642 16:20:14 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:06.642 16:20:14 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:06.642 16:20:14 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:06.642 16:20:14 -- nvmf/common.sh@295 -- # net_devs=() 00:09:06.642 16:20:14 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:06.642 16:20:14 -- 
nvmf/common.sh@296 -- # e810=() 00:09:06.642 16:20:14 -- nvmf/common.sh@296 -- # local -ga e810 00:09:06.642 16:20:14 -- nvmf/common.sh@297 -- # x722=() 00:09:06.642 16:20:14 -- nvmf/common.sh@297 -- # local -ga x722 00:09:06.642 16:20:14 -- nvmf/common.sh@298 -- # mlx=() 00:09:06.642 16:20:14 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:06.643 16:20:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:06.643 16:20:14 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:06.643 16:20:14 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:06.643 16:20:14 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:06.643 16:20:14 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:06.643 16:20:14 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:06.643 16:20:14 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:06.643 16:20:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:06.643 16:20:14 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:06.643 16:20:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:06.643 16:20:14 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:06.643 16:20:14 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:06.643 16:20:14 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:06.643 16:20:14 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:06.643 16:20:14 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:06.643 16:20:14 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:06.643 16:20:14 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:06.643 16:20:14 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:06.643 16:20:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:06.643 16:20:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:09:06.643 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:09:06.643 16:20:14 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:06.643 16:20:14 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:06.643 16:20:14 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:09:06.643 16:20:14 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:09:06.643 16:20:14 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:06.643 16:20:14 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:06.643 16:20:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:06.643 16:20:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:09:06.643 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:09:06.643 16:20:14 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:06.643 16:20:14 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:06.643 16:20:14 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:09:06.643 16:20:14 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:09:06.643 16:20:14 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:06.643 16:20:14 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:06.643 16:20:14 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:06.643 16:20:14 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:06.643 16:20:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:06.643 16:20:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.643 16:20:14 -- nvmf/common.sh@384 -- # 
(( 1 == 0 )) 00:09:06.643 16:20:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.643 16:20:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:09:06.643 Found net devices under 0000:18:00.0: mlx_0_0 00:09:06.643 16:20:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.643 16:20:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:06.643 16:20:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.643 16:20:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:06.643 16:20:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.643 16:20:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:09:06.643 Found net devices under 0000:18:00.1: mlx_0_1 00:09:06.643 16:20:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.643 16:20:14 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:06.643 16:20:14 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:06.643 16:20:14 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:06.643 16:20:14 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:09:06.643 16:20:14 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:09:06.643 16:20:14 -- nvmf/common.sh@409 -- # rdma_device_init 00:09:06.643 16:20:14 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:09:06.643 16:20:14 -- nvmf/common.sh@58 -- # uname 00:09:06.643 16:20:14 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:06.643 16:20:14 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:06.643 16:20:14 -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:06.643 16:20:14 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:06.643 16:20:14 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:06.643 16:20:14 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:06.643 16:20:14 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:06.643 16:20:14 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:06.643 16:20:14 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:09:06.643 16:20:14 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:06.643 16:20:14 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:06.643 16:20:14 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:06.643 16:20:14 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:06.643 16:20:14 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:06.643 16:20:14 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:06.643 16:20:14 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:06.643 16:20:14 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:06.643 16:20:14 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:06.643 16:20:14 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:06.643 16:20:14 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:06.643 16:20:14 -- nvmf/common.sh@105 -- # continue 2 00:09:06.643 16:20:14 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:06.643 16:20:14 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:06.643 16:20:14 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:06.643 16:20:14 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:06.643 16:20:14 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:06.643 16:20:14 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:06.643 16:20:14 -- nvmf/common.sh@105 -- # continue 2 00:09:06.643 16:20:14 -- nvmf/common.sh@73 -- # for nic_name 
in $(get_rdma_if_list) 00:09:06.643 16:20:14 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:06.643 16:20:14 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:06.643 16:20:14 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:06.643 16:20:14 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:06.643 16:20:14 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:06.643 16:20:14 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:06.643 16:20:14 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:06.643 16:20:14 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:06.643 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:06.643 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:09:06.643 altname enp24s0f0np0 00:09:06.643 altname ens785f0np0 00:09:06.643 inet 192.168.100.8/24 scope global mlx_0_0 00:09:06.643 valid_lft forever preferred_lft forever 00:09:06.643 16:20:14 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:06.643 16:20:14 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:06.643 16:20:14 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:06.643 16:20:14 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:06.643 16:20:14 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:06.643 16:20:14 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:06.643 16:20:14 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:06.643 16:20:14 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:06.643 16:20:14 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:06.643 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:06.643 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:09:06.643 altname enp24s0f1np1 00:09:06.643 altname ens785f1np1 00:09:06.643 inet 192.168.100.9/24 scope global mlx_0_1 00:09:06.643 valid_lft forever preferred_lft forever 00:09:06.643 16:20:14 -- nvmf/common.sh@411 -- # return 0 00:09:06.643 16:20:14 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:06.643 16:20:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:06.643 16:20:14 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:09:06.643 16:20:14 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:09:06.643 16:20:14 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:06.643 16:20:14 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:06.643 16:20:14 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:06.643 16:20:14 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:06.643 16:20:14 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:06.643 16:20:14 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:06.643 16:20:14 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:06.643 16:20:14 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:06.643 16:20:14 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:06.643 16:20:14 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:06.643 16:20:14 -- nvmf/common.sh@105 -- # continue 2 00:09:06.643 16:20:14 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:06.643 16:20:14 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:06.643 16:20:14 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:06.643 16:20:14 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:06.643 16:20:14 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:06.643 16:20:14 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:06.643 16:20:14 -- 
nvmf/common.sh@105 -- # continue 2 00:09:06.643 16:20:14 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:06.643 16:20:14 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:06.643 16:20:14 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:06.643 16:20:14 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:06.643 16:20:14 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:06.643 16:20:14 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:06.643 16:20:14 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:06.643 16:20:14 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:06.643 16:20:14 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:06.643 16:20:14 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:06.643 16:20:14 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:06.643 16:20:14 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:06.643 16:20:14 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:09:06.643 192.168.100.9' 00:09:06.643 16:20:14 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:09:06.643 192.168.100.9' 00:09:06.643 16:20:14 -- nvmf/common.sh@446 -- # head -n 1 00:09:06.643 16:20:14 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:06.643 16:20:14 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:09:06.643 192.168.100.9' 00:09:06.643 16:20:14 -- nvmf/common.sh@447 -- # tail -n +2 00:09:06.644 16:20:14 -- nvmf/common.sh@447 -- # head -n 1 00:09:06.644 16:20:14 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:06.644 16:20:14 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:09:06.644 16:20:14 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:06.644 16:20:14 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:09:06.644 16:20:15 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:09:06.644 16:20:15 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:09:06.644 16:20:15 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:09:06.644 16:20:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:06.644 16:20:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:06.644 16:20:15 -- common/autotest_common.sh@10 -- # set +x 00:09:06.644 16:20:15 -- nvmf/common.sh@470 -- # nvmfpid=377565 00:09:06.644 16:20:15 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:06.644 16:20:15 -- nvmf/common.sh@471 -- # waitforlisten 377565 00:09:06.644 16:20:15 -- common/autotest_common.sh@817 -- # '[' -z 377565 ']' 00:09:06.644 16:20:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.644 16:20:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:06.644 16:20:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.644 16:20:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:06.644 16:20:15 -- common/autotest_common.sh@10 -- # set +x 00:09:06.644 [2024-04-26 16:20:15.085199] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
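Note: nvmfappstart launches the target binary in the background and then sits in waitforlisten until the app answers on its RPC socket (the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line). A rough standalone equivalent of that pattern; the polling loop is only an illustration, not the harness's actual waitforlisten implementation:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # crude wait: poll until the default RPC socket shows up
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done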
00:09:06.644 [2024-04-26 16:20:15.085257] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:06.644 EAL: No free 2048 kB hugepages reported on node 1 00:09:06.644 [2024-04-26 16:20:15.157890] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:06.644 [2024-04-26 16:20:15.238885] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:06.644 [2024-04-26 16:20:15.238931] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:06.644 [2024-04-26 16:20:15.238939] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:06.644 [2024-04-26 16:20:15.238964] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:06.644 [2024-04-26 16:20:15.238971] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:06.644 [2024-04-26 16:20:15.239026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:06.644 [2024-04-26 16:20:15.239116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:06.644 [2024-04-26 16:20:15.239195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:06.644 [2024-04-26 16:20:15.239197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.902 16:20:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:06.902 16:20:15 -- common/autotest_common.sh@850 -- # return 0 00:09:06.902 16:20:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:06.902 16:20:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:06.902 16:20:15 -- common/autotest_common.sh@10 -- # set +x 00:09:07.161 16:20:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:07.161 16:20:15 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:07.161 16:20:15 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:07.161 16:20:15 -- target/multitarget.sh@21 -- # jq length 00:09:07.161 16:20:16 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:09:07.161 16:20:16 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:09:07.161 "nvmf_tgt_1" 00:09:07.161 16:20:16 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:09:07.420 "nvmf_tgt_2" 00:09:07.420 16:20:16 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:07.420 16:20:16 -- target/multitarget.sh@28 -- # jq length 00:09:07.420 16:20:16 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:09:07.420 16:20:16 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:09:07.680 true 00:09:07.680 16:20:16 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:09:07.680 true 00:09:07.680 16:20:16 -- 
target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:07.680 16:20:16 -- target/multitarget.sh@35 -- # jq length 00:09:07.680 16:20:16 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:09:07.680 16:20:16 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:07.680 16:20:16 -- target/multitarget.sh@41 -- # nvmftestfini 00:09:07.680 16:20:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:07.680 16:20:16 -- nvmf/common.sh@117 -- # sync 00:09:07.680 16:20:16 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:07.680 16:20:16 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:07.680 16:20:16 -- nvmf/common.sh@120 -- # set +e 00:09:07.680 16:20:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:07.680 16:20:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:07.680 rmmod nvme_rdma 00:09:07.939 rmmod nvme_fabrics 00:09:07.939 16:20:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:07.939 16:20:16 -- nvmf/common.sh@124 -- # set -e 00:09:07.939 16:20:16 -- nvmf/common.sh@125 -- # return 0 00:09:07.939 16:20:16 -- nvmf/common.sh@478 -- # '[' -n 377565 ']' 00:09:07.939 16:20:16 -- nvmf/common.sh@479 -- # killprocess 377565 00:09:07.939 16:20:16 -- common/autotest_common.sh@936 -- # '[' -z 377565 ']' 00:09:07.939 16:20:16 -- common/autotest_common.sh@940 -- # kill -0 377565 00:09:07.939 16:20:16 -- common/autotest_common.sh@941 -- # uname 00:09:07.939 16:20:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:07.939 16:20:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 377565 00:09:07.939 16:20:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:07.939 16:20:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:07.939 16:20:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 377565' 00:09:07.939 killing process with pid 377565 00:09:07.939 16:20:16 -- common/autotest_common.sh@955 -- # kill 377565 00:09:07.939 16:20:16 -- common/autotest_common.sh@960 -- # wait 377565 00:09:08.199 16:20:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:08.199 16:20:17 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:09:08.199 00:09:08.199 real 0m8.206s 00:09:08.199 user 0m9.565s 00:09:08.199 sys 0m5.144s 00:09:08.199 16:20:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:08.199 16:20:17 -- common/autotest_common.sh@10 -- # set +x 00:09:08.199 ************************************ 00:09:08.199 END TEST nvmf_multitarget 00:09:08.199 ************************************ 00:09:08.199 16:20:17 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:09:08.199 16:20:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:08.199 16:20:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:08.199 16:20:17 -- common/autotest_common.sh@10 -- # set +x 00:09:08.458 ************************************ 00:09:08.458 START TEST nvmf_rpc 00:09:08.458 ************************************ 00:09:08.458 16:20:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:09:08.458 * Looking for test storage... 
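Note: the multitarget test above drives everything through multitarget_rpc.py: it counts targets with jq, adds two more, deletes them again, and checks the count each time. The calls visible in the xtrace, condensed:

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    $rpc_py nvmf_get_targets | jq length            # 1 (only the default target)
    $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
    $rpc_py nvmf_get_targets | jq length            # 3
    $rpc_py nvmf_delete_target -n nvmf_tgt_1
    $rpc_py nvmf_delete_target -n nvmf_tgt_2
    $rpc_py nvmf_get_targets | jq length            # back to 1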
00:09:08.458 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:08.458 16:20:17 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:08.458 16:20:17 -- nvmf/common.sh@7 -- # uname -s 00:09:08.458 16:20:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:08.458 16:20:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:08.458 16:20:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:08.458 16:20:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:08.458 16:20:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:08.458 16:20:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:08.458 16:20:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:08.458 16:20:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:08.458 16:20:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:08.458 16:20:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:08.458 16:20:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:09:08.458 16:20:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:09:08.458 16:20:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:08.458 16:20:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:08.458 16:20:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:08.458 16:20:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:08.458 16:20:17 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:08.458 16:20:17 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:08.459 16:20:17 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:08.459 16:20:17 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:08.459 16:20:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.459 16:20:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.459 16:20:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.459 16:20:17 -- paths/export.sh@5 -- # export PATH 00:09:08.459 16:20:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.459 16:20:17 -- nvmf/common.sh@47 -- # : 0 00:09:08.459 16:20:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:08.459 16:20:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:08.459 16:20:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:08.459 16:20:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:08.459 16:20:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:08.459 16:20:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:08.459 16:20:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:08.459 16:20:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:08.459 16:20:17 -- target/rpc.sh@11 -- # loops=5 00:09:08.459 16:20:17 -- target/rpc.sh@23 -- # nvmftestinit 00:09:08.459 16:20:17 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:09:08.459 16:20:17 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:08.459 16:20:17 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:08.459 16:20:17 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:08.459 16:20:17 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:08.459 16:20:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.459 16:20:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:08.459 16:20:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.459 16:20:17 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:08.459 16:20:17 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:08.459 16:20:17 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:08.459 16:20:17 -- common/autotest_common.sh@10 -- # set +x 00:09:15.026 16:20:23 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:15.026 16:20:23 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:15.026 16:20:23 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:15.026 16:20:23 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:15.026 16:20:23 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:15.026 16:20:23 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:15.026 16:20:23 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:15.026 16:20:23 -- nvmf/common.sh@295 -- # net_devs=() 00:09:15.026 16:20:23 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:15.026 16:20:23 -- nvmf/common.sh@296 -- # e810=() 00:09:15.026 16:20:23 -- nvmf/common.sh@296 -- # local -ga e810 00:09:15.026 
16:20:23 -- nvmf/common.sh@297 -- # x722=() 00:09:15.026 16:20:23 -- nvmf/common.sh@297 -- # local -ga x722 00:09:15.026 16:20:23 -- nvmf/common.sh@298 -- # mlx=() 00:09:15.026 16:20:23 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:15.026 16:20:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:15.026 16:20:23 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:15.026 16:20:23 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:15.026 16:20:23 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:15.026 16:20:23 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:15.026 16:20:23 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:15.026 16:20:23 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:15.026 16:20:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:15.026 16:20:23 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:15.026 16:20:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:15.026 16:20:23 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:15.026 16:20:23 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:15.026 16:20:23 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:15.026 16:20:23 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:15.026 16:20:23 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:15.026 16:20:23 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:15.026 16:20:23 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:15.026 16:20:23 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:15.026 16:20:23 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:15.026 16:20:23 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:09:15.026 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:09:15.026 16:20:23 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:15.026 16:20:23 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:15.026 16:20:23 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:09:15.026 16:20:23 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:09:15.026 16:20:23 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:15.026 16:20:23 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:15.026 16:20:23 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:15.026 16:20:23 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:09:15.026 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:09:15.026 16:20:23 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:15.026 16:20:23 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:15.026 16:20:23 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:09:15.026 16:20:23 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:09:15.026 16:20:23 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:15.026 16:20:23 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:15.026 16:20:23 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:15.026 16:20:23 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:15.026 16:20:23 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:15.027 16:20:23 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:15.027 16:20:23 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:15.027 16:20:23 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:09:15.027 16:20:23 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:09:15.027 Found net devices under 0000:18:00.0: mlx_0_0 00:09:15.027 16:20:23 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:15.027 16:20:23 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:15.027 16:20:23 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:15.027 16:20:23 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:15.027 16:20:23 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:15.027 16:20:23 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:09:15.027 Found net devices under 0000:18:00.1: mlx_0_1 00:09:15.027 16:20:23 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:15.027 16:20:23 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:15.027 16:20:23 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:15.027 16:20:23 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:15.027 16:20:23 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:09:15.027 16:20:23 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:09:15.027 16:20:23 -- nvmf/common.sh@409 -- # rdma_device_init 00:09:15.027 16:20:23 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:09:15.027 16:20:23 -- nvmf/common.sh@58 -- # uname 00:09:15.027 16:20:23 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:15.027 16:20:23 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:15.027 16:20:23 -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:15.027 16:20:23 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:15.027 16:20:23 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:15.027 16:20:23 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:15.027 16:20:23 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:15.027 16:20:23 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:15.027 16:20:23 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:09:15.027 16:20:23 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:15.027 16:20:23 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:15.027 16:20:23 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:15.027 16:20:23 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:15.027 16:20:23 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:15.027 16:20:23 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:15.027 16:20:23 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:15.027 16:20:23 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:15.027 16:20:23 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:15.027 16:20:23 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:15.027 16:20:23 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:15.027 16:20:23 -- nvmf/common.sh@105 -- # continue 2 00:09:15.027 16:20:23 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:15.027 16:20:23 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:15.027 16:20:23 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:15.027 16:20:23 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:15.027 16:20:23 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:15.027 16:20:23 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:15.027 16:20:23 -- nvmf/common.sh@105 -- # continue 2 00:09:15.027 16:20:23 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:15.027 16:20:23 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 
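Note: the "Found net devices under 0000:18:00.x" lines come from expanding a sysfs glob; each PCI function's kernel netdev appears as a directory under its device node. A minimal sketch of that lookup:

    pci=0000:18:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    echo "Found net devices under $pci: ${pci_net_devs[@]##*/}"   # -> mlx_0_0 on this machine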
00:09:15.027 16:20:23 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:15.027 16:20:23 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:15.027 16:20:23 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:15.027 16:20:23 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:15.027 16:20:23 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:15.027 16:20:23 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:15.027 16:20:23 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:15.027 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:15.027 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:09:15.027 altname enp24s0f0np0 00:09:15.027 altname ens785f0np0 00:09:15.027 inet 192.168.100.8/24 scope global mlx_0_0 00:09:15.027 valid_lft forever preferred_lft forever 00:09:15.027 16:20:23 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:15.027 16:20:23 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:15.027 16:20:23 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:15.027 16:20:23 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:15.027 16:20:23 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:15.027 16:20:23 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:15.027 16:20:23 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:15.027 16:20:23 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:15.027 16:20:23 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:15.027 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:15.027 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:09:15.027 altname enp24s0f1np1 00:09:15.027 altname ens785f1np1 00:09:15.027 inet 192.168.100.9/24 scope global mlx_0_1 00:09:15.027 valid_lft forever preferred_lft forever 00:09:15.027 16:20:23 -- nvmf/common.sh@411 -- # return 0 00:09:15.027 16:20:23 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:15.027 16:20:23 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:15.027 16:20:23 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:09:15.027 16:20:23 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:09:15.027 16:20:23 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:15.027 16:20:23 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:15.027 16:20:23 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:15.027 16:20:23 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:15.027 16:20:23 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:15.027 16:20:23 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:15.027 16:20:23 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:15.027 16:20:23 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:15.027 16:20:23 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:15.027 16:20:23 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:15.027 16:20:23 -- nvmf/common.sh@105 -- # continue 2 00:09:15.027 16:20:23 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:15.027 16:20:23 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:15.027 16:20:23 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:15.027 16:20:23 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:15.027 16:20:23 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:15.027 16:20:23 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:15.027 16:20:23 -- nvmf/common.sh@105 -- # continue 2 00:09:15.027 16:20:23 -- nvmf/common.sh@86 -- # for nic_name in 
$(get_rdma_if_list) 00:09:15.027 16:20:23 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:15.027 16:20:23 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:15.027 16:20:23 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:15.027 16:20:23 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:15.027 16:20:23 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:15.027 16:20:23 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:15.027 16:20:23 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:15.027 16:20:23 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:15.027 16:20:23 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:15.027 16:20:23 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:15.027 16:20:23 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:15.027 16:20:23 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:09:15.027 192.168.100.9' 00:09:15.027 16:20:23 -- nvmf/common.sh@446 -- # head -n 1 00:09:15.027 16:20:23 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:09:15.027 192.168.100.9' 00:09:15.027 16:20:23 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:15.027 16:20:23 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:09:15.027 192.168.100.9' 00:09:15.027 16:20:23 -- nvmf/common.sh@447 -- # tail -n +2 00:09:15.027 16:20:23 -- nvmf/common.sh@447 -- # head -n 1 00:09:15.027 16:20:23 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:15.027 16:20:23 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:09:15.027 16:20:23 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:15.027 16:20:23 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:09:15.027 16:20:23 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:09:15.027 16:20:23 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:09:15.027 16:20:23 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:09:15.027 16:20:23 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:15.027 16:20:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:15.027 16:20:23 -- common/autotest_common.sh@10 -- # set +x 00:09:15.027 16:20:23 -- nvmf/common.sh@470 -- # nvmfpid=380729 00:09:15.027 16:20:23 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:15.027 16:20:23 -- nvmf/common.sh@471 -- # waitforlisten 380729 00:09:15.027 16:20:23 -- common/autotest_common.sh@817 -- # '[' -z 380729 ']' 00:09:15.027 16:20:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.027 16:20:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:15.027 16:20:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.027 16:20:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:15.027 16:20:23 -- common/autotest_common.sh@10 -- # set +x 00:09:15.027 [2024-04-26 16:20:23.952005] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
00:09:15.027 [2024-04-26 16:20:23.952061] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:15.027 EAL: No free 2048 kB hugepages reported on node 1 00:09:15.027 [2024-04-26 16:20:24.025075] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:15.286 [2024-04-26 16:20:24.109383] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:15.286 [2024-04-26 16:20:24.109424] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:15.286 [2024-04-26 16:20:24.109434] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:15.286 [2024-04-26 16:20:24.109443] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:15.286 [2024-04-26 16:20:24.109451] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:15.286 [2024-04-26 16:20:24.109502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.286 [2024-04-26 16:20:24.109510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:15.286 [2024-04-26 16:20:24.109592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:15.286 [2024-04-26 16:20:24.109594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.852 16:20:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:15.852 16:20:24 -- common/autotest_common.sh@850 -- # return 0 00:09:15.852 16:20:24 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:15.852 16:20:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:15.852 16:20:24 -- common/autotest_common.sh@10 -- # set +x 00:09:15.852 16:20:24 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:15.852 16:20:24 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:09:15.852 16:20:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:15.852 16:20:24 -- common/autotest_common.sh@10 -- # set +x 00:09:15.852 16:20:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:15.852 16:20:24 -- target/rpc.sh@26 -- # stats='{ 00:09:15.852 "tick_rate": 2300000000, 00:09:15.852 "poll_groups": [ 00:09:15.852 { 00:09:15.852 "name": "nvmf_tgt_poll_group_0", 00:09:15.852 "admin_qpairs": 0, 00:09:15.852 "io_qpairs": 0, 00:09:15.852 "current_admin_qpairs": 0, 00:09:15.852 "current_io_qpairs": 0, 00:09:15.852 "pending_bdev_io": 0, 00:09:15.852 "completed_nvme_io": 0, 00:09:15.852 "transports": [] 00:09:15.852 }, 00:09:15.852 { 00:09:15.852 "name": "nvmf_tgt_poll_group_1", 00:09:15.852 "admin_qpairs": 0, 00:09:15.852 "io_qpairs": 0, 00:09:15.852 "current_admin_qpairs": 0, 00:09:15.852 "current_io_qpairs": 0, 00:09:15.852 "pending_bdev_io": 0, 00:09:15.852 "completed_nvme_io": 0, 00:09:15.852 "transports": [] 00:09:15.852 }, 00:09:15.852 { 00:09:15.852 "name": "nvmf_tgt_poll_group_2", 00:09:15.852 "admin_qpairs": 0, 00:09:15.853 "io_qpairs": 0, 00:09:15.853 "current_admin_qpairs": 0, 00:09:15.853 "current_io_qpairs": 0, 00:09:15.853 "pending_bdev_io": 0, 00:09:15.853 "completed_nvme_io": 0, 00:09:15.853 "transports": [] 00:09:15.853 }, 00:09:15.853 { 00:09:15.853 "name": "nvmf_tgt_poll_group_3", 00:09:15.853 "admin_qpairs": 0, 00:09:15.853 "io_qpairs": 0, 00:09:15.853 "current_admin_qpairs": 0, 00:09:15.853 
"current_io_qpairs": 0, 00:09:15.853 "pending_bdev_io": 0, 00:09:15.853 "completed_nvme_io": 0, 00:09:15.853 "transports": [] 00:09:15.853 } 00:09:15.853 ] 00:09:15.853 }' 00:09:15.853 16:20:24 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:09:15.853 16:20:24 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:09:15.853 16:20:24 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:09:15.853 16:20:24 -- target/rpc.sh@15 -- # wc -l 00:09:16.112 16:20:24 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:09:16.112 16:20:24 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:09:16.112 16:20:24 -- target/rpc.sh@29 -- # [[ null == null ]] 00:09:16.112 16:20:24 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:16.112 16:20:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:16.112 16:20:24 -- common/autotest_common.sh@10 -- # set +x 00:09:16.112 [2024-04-26 16:20:24.959005] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x210f360/0x2113850) succeed. 00:09:16.112 [2024-04-26 16:20:24.969317] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x21109a0/0x2154ee0) succeed. 00:09:16.112 16:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:16.112 16:20:25 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:09:16.112 16:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:16.112 16:20:25 -- common/autotest_common.sh@10 -- # set +x 00:09:16.112 16:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:16.372 16:20:25 -- target/rpc.sh@33 -- # stats='{ 00:09:16.372 "tick_rate": 2300000000, 00:09:16.372 "poll_groups": [ 00:09:16.372 { 00:09:16.372 "name": "nvmf_tgt_poll_group_0", 00:09:16.372 "admin_qpairs": 0, 00:09:16.372 "io_qpairs": 0, 00:09:16.372 "current_admin_qpairs": 0, 00:09:16.372 "current_io_qpairs": 0, 00:09:16.372 "pending_bdev_io": 0, 00:09:16.372 "completed_nvme_io": 0, 00:09:16.372 "transports": [ 00:09:16.372 { 00:09:16.372 "trtype": "RDMA", 00:09:16.372 "pending_data_buffer": 0, 00:09:16.372 "devices": [ 00:09:16.372 { 00:09:16.372 "name": "mlx5_0", 00:09:16.372 "polls": 15206, 00:09:16.372 "idle_polls": 15206, 00:09:16.372 "completions": 0, 00:09:16.372 "requests": 0, 00:09:16.372 "request_latency": 0, 00:09:16.372 "pending_free_request": 0, 00:09:16.372 "pending_rdma_read": 0, 00:09:16.372 "pending_rdma_write": 0, 00:09:16.372 "pending_rdma_send": 0, 00:09:16.372 "total_send_wrs": 0, 00:09:16.372 "send_doorbell_updates": 0, 00:09:16.372 "total_recv_wrs": 4096, 00:09:16.372 "recv_doorbell_updates": 1 00:09:16.372 }, 00:09:16.372 { 00:09:16.372 "name": "mlx5_1", 00:09:16.372 "polls": 15206, 00:09:16.372 "idle_polls": 15206, 00:09:16.372 "completions": 0, 00:09:16.372 "requests": 0, 00:09:16.372 "request_latency": 0, 00:09:16.372 "pending_free_request": 0, 00:09:16.372 "pending_rdma_read": 0, 00:09:16.372 "pending_rdma_write": 0, 00:09:16.372 "pending_rdma_send": 0, 00:09:16.372 "total_send_wrs": 0, 00:09:16.372 "send_doorbell_updates": 0, 00:09:16.372 "total_recv_wrs": 4096, 00:09:16.372 "recv_doorbell_updates": 1 00:09:16.372 } 00:09:16.372 ] 00:09:16.372 } 00:09:16.372 ] 00:09:16.372 }, 00:09:16.372 { 00:09:16.372 "name": "nvmf_tgt_poll_group_1", 00:09:16.372 "admin_qpairs": 0, 00:09:16.372 "io_qpairs": 0, 00:09:16.372 "current_admin_qpairs": 0, 00:09:16.372 "current_io_qpairs": 0, 00:09:16.372 "pending_bdev_io": 0, 00:09:16.372 "completed_nvme_io": 0, 00:09:16.372 "transports": [ 00:09:16.372 { 00:09:16.372 "trtype": "RDMA", 00:09:16.372 
"pending_data_buffer": 0, 00:09:16.372 "devices": [ 00:09:16.372 { 00:09:16.372 "name": "mlx5_0", 00:09:16.372 "polls": 9813, 00:09:16.372 "idle_polls": 9813, 00:09:16.372 "completions": 0, 00:09:16.372 "requests": 0, 00:09:16.372 "request_latency": 0, 00:09:16.372 "pending_free_request": 0, 00:09:16.372 "pending_rdma_read": 0, 00:09:16.372 "pending_rdma_write": 0, 00:09:16.372 "pending_rdma_send": 0, 00:09:16.372 "total_send_wrs": 0, 00:09:16.372 "send_doorbell_updates": 0, 00:09:16.372 "total_recv_wrs": 4096, 00:09:16.372 "recv_doorbell_updates": 1 00:09:16.372 }, 00:09:16.372 { 00:09:16.372 "name": "mlx5_1", 00:09:16.372 "polls": 9813, 00:09:16.372 "idle_polls": 9813, 00:09:16.372 "completions": 0, 00:09:16.372 "requests": 0, 00:09:16.372 "request_latency": 0, 00:09:16.372 "pending_free_request": 0, 00:09:16.372 "pending_rdma_read": 0, 00:09:16.372 "pending_rdma_write": 0, 00:09:16.372 "pending_rdma_send": 0, 00:09:16.372 "total_send_wrs": 0, 00:09:16.372 "send_doorbell_updates": 0, 00:09:16.372 "total_recv_wrs": 4096, 00:09:16.372 "recv_doorbell_updates": 1 00:09:16.372 } 00:09:16.372 ] 00:09:16.372 } 00:09:16.372 ] 00:09:16.372 }, 00:09:16.372 { 00:09:16.372 "name": "nvmf_tgt_poll_group_2", 00:09:16.372 "admin_qpairs": 0, 00:09:16.372 "io_qpairs": 0, 00:09:16.372 "current_admin_qpairs": 0, 00:09:16.372 "current_io_qpairs": 0, 00:09:16.372 "pending_bdev_io": 0, 00:09:16.372 "completed_nvme_io": 0, 00:09:16.372 "transports": [ 00:09:16.372 { 00:09:16.372 "trtype": "RDMA", 00:09:16.372 "pending_data_buffer": 0, 00:09:16.372 "devices": [ 00:09:16.372 { 00:09:16.372 "name": "mlx5_0", 00:09:16.372 "polls": 5338, 00:09:16.372 "idle_polls": 5338, 00:09:16.372 "completions": 0, 00:09:16.372 "requests": 0, 00:09:16.372 "request_latency": 0, 00:09:16.372 "pending_free_request": 0, 00:09:16.372 "pending_rdma_read": 0, 00:09:16.372 "pending_rdma_write": 0, 00:09:16.372 "pending_rdma_send": 0, 00:09:16.372 "total_send_wrs": 0, 00:09:16.372 "send_doorbell_updates": 0, 00:09:16.372 "total_recv_wrs": 4096, 00:09:16.372 "recv_doorbell_updates": 1 00:09:16.372 }, 00:09:16.372 { 00:09:16.372 "name": "mlx5_1", 00:09:16.372 "polls": 5338, 00:09:16.372 "idle_polls": 5338, 00:09:16.372 "completions": 0, 00:09:16.372 "requests": 0, 00:09:16.372 "request_latency": 0, 00:09:16.372 "pending_free_request": 0, 00:09:16.372 "pending_rdma_read": 0, 00:09:16.372 "pending_rdma_write": 0, 00:09:16.372 "pending_rdma_send": 0, 00:09:16.372 "total_send_wrs": 0, 00:09:16.372 "send_doorbell_updates": 0, 00:09:16.372 "total_recv_wrs": 4096, 00:09:16.372 "recv_doorbell_updates": 1 00:09:16.372 } 00:09:16.372 ] 00:09:16.372 } 00:09:16.372 ] 00:09:16.372 }, 00:09:16.372 { 00:09:16.372 "name": "nvmf_tgt_poll_group_3", 00:09:16.372 "admin_qpairs": 0, 00:09:16.372 "io_qpairs": 0, 00:09:16.372 "current_admin_qpairs": 0, 00:09:16.372 "current_io_qpairs": 0, 00:09:16.372 "pending_bdev_io": 0, 00:09:16.372 "completed_nvme_io": 0, 00:09:16.372 "transports": [ 00:09:16.372 { 00:09:16.372 "trtype": "RDMA", 00:09:16.372 "pending_data_buffer": 0, 00:09:16.372 "devices": [ 00:09:16.372 { 00:09:16.372 "name": "mlx5_0", 00:09:16.372 "polls": 865, 00:09:16.372 "idle_polls": 865, 00:09:16.372 "completions": 0, 00:09:16.372 "requests": 0, 00:09:16.372 "request_latency": 0, 00:09:16.372 "pending_free_request": 0, 00:09:16.372 "pending_rdma_read": 0, 00:09:16.372 "pending_rdma_write": 0, 00:09:16.373 "pending_rdma_send": 0, 00:09:16.373 "total_send_wrs": 0, 00:09:16.373 "send_doorbell_updates": 0, 00:09:16.373 "total_recv_wrs": 4096, 00:09:16.373 
"recv_doorbell_updates": 1 00:09:16.373 }, 00:09:16.373 { 00:09:16.373 "name": "mlx5_1", 00:09:16.373 "polls": 865, 00:09:16.373 "idle_polls": 865, 00:09:16.373 "completions": 0, 00:09:16.373 "requests": 0, 00:09:16.373 "request_latency": 0, 00:09:16.373 "pending_free_request": 0, 00:09:16.373 "pending_rdma_read": 0, 00:09:16.373 "pending_rdma_write": 0, 00:09:16.373 "pending_rdma_send": 0, 00:09:16.373 "total_send_wrs": 0, 00:09:16.373 "send_doorbell_updates": 0, 00:09:16.373 "total_recv_wrs": 4096, 00:09:16.373 "recv_doorbell_updates": 1 00:09:16.373 } 00:09:16.373 ] 00:09:16.373 } 00:09:16.373 ] 00:09:16.373 } 00:09:16.373 ] 00:09:16.373 }' 00:09:16.373 16:20:25 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:09:16.373 16:20:25 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:16.373 16:20:25 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:16.373 16:20:25 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:16.373 16:20:25 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:09:16.373 16:20:25 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:09:16.373 16:20:25 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:16.373 16:20:25 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:16.373 16:20:25 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:16.373 16:20:25 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:09:16.373 16:20:25 -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:09:16.373 16:20:25 -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:09:16.373 16:20:25 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:09:16.373 16:20:25 -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:09:16.373 16:20:25 -- target/rpc.sh@15 -- # wc -l 00:09:16.373 16:20:25 -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:09:16.373 16:20:25 -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:09:16.373 16:20:25 -- target/rpc.sh@41 -- # transport_type=RDMA 00:09:16.373 16:20:25 -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:09:16.373 16:20:25 -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:09:16.373 16:20:25 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:09:16.373 16:20:25 -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:09:16.373 16:20:25 -- target/rpc.sh@15 -- # wc -l 00:09:16.373 16:20:25 -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:09:16.373 16:20:25 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:09:16.373 16:20:25 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:09:16.373 16:20:25 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:16.373 16:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:16.373 16:20:25 -- common/autotest_common.sh@10 -- # set +x 00:09:16.373 Malloc1 00:09:16.373 16:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:16.373 16:20:25 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:16.373 16:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:16.373 16:20:25 -- common/autotest_common.sh@10 -- # set +x 00:09:16.373 16:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:16.373 16:20:25 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:16.373 16:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:16.373 16:20:25 -- 
common/autotest_common.sh@10 -- # set +x 00:09:16.373 16:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:16.373 16:20:25 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:09:16.373 16:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:16.373 16:20:25 -- common/autotest_common.sh@10 -- # set +x 00:09:16.373 16:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:16.373 16:20:25 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:16.373 16:20:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:16.373 16:20:25 -- common/autotest_common.sh@10 -- # set +x 00:09:16.373 [2024-04-26 16:20:25.387960] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:16.373 16:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:16.373 16:20:25 -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -a 192.168.100.8 -s 4420 00:09:16.373 16:20:25 -- common/autotest_common.sh@638 -- # local es=0 00:09:16.373 16:20:25 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -a 192.168.100.8 -s 4420 00:09:16.373 16:20:25 -- common/autotest_common.sh@626 -- # local arg=nvme 00:09:16.373 16:20:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:16.373 16:20:25 -- common/autotest_common.sh@630 -- # type -t nvme 00:09:16.632 16:20:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:16.632 16:20:25 -- common/autotest_common.sh@632 -- # type -P nvme 00:09:16.632 16:20:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:16.632 16:20:25 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:09:16.632 16:20:25 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:09:16.632 16:20:25 -- common/autotest_common.sh@641 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -a 192.168.100.8 -s 4420 00:09:16.632 [2024-04-26 16:20:25.423598] ctrlr.c: 780:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c' 00:09:16.632 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:16.632 could not add new controller: failed to write to nvme-fabrics device 00:09:16.632 16:20:25 -- common/autotest_common.sh@641 -- # es=1 00:09:16.632 16:20:25 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:16.632 16:20:25 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:16.632 16:20:25 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:16.632 16:20:25 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:09:16.632 16:20:25 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:09:16.632 16:20:25 -- common/autotest_common.sh@10 -- # set +x 00:09:16.632 16:20:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:16.632 16:20:25 -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:18.011 16:20:27 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:09:18.011 16:20:27 -- common/autotest_common.sh@1184 -- # local i=0 00:09:18.011 16:20:27 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:18.011 16:20:27 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:18.011 16:20:27 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:20.545 16:20:29 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:20.545 16:20:29 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:20.545 16:20:29 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:20.545 16:20:29 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:20.545 16:20:29 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:20.545 16:20:29 -- common/autotest_common.sh@1194 -- # return 0 00:09:20.545 16:20:29 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:23.831 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.831 16:20:32 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:23.831 16:20:32 -- common/autotest_common.sh@1205 -- # local i=0 00:09:23.831 16:20:32 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:23.831 16:20:32 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:23.831 16:20:32 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:23.831 16:20:32 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:23.831 16:20:32 -- common/autotest_common.sh@1217 -- # return 0 00:09:23.831 16:20:32 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:09:23.831 16:20:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:23.831 16:20:32 -- common/autotest_common.sh@10 -- # set +x 00:09:23.831 16:20:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:23.831 16:20:32 -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:23.831 16:20:32 -- common/autotest_common.sh@638 -- # local es=0 00:09:23.831 16:20:32 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:23.831 16:20:32 -- common/autotest_common.sh@626 -- # local arg=nvme 00:09:23.831 16:20:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:23.831 16:20:32 -- common/autotest_common.sh@630 -- # type -t nvme 00:09:23.831 16:20:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:23.831 16:20:32 -- common/autotest_common.sh@632 -- # type -P nvme 00:09:23.831 16:20:32 -- common/autotest_common.sh@630 -- 
# case "$(type -t "$arg")" in 00:09:23.831 16:20:32 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:09:23.831 16:20:32 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:09:23.831 16:20:32 -- common/autotest_common.sh@641 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:23.831 [2024-04-26 16:20:32.333691] ctrlr.c: 780:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c' 00:09:23.831 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:23.831 could not add new controller: failed to write to nvme-fabrics device 00:09:23.831 16:20:32 -- common/autotest_common.sh@641 -- # es=1 00:09:23.831 16:20:32 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:23.831 16:20:32 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:23.831 16:20:32 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:23.831 16:20:32 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:09:23.831 16:20:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:23.831 16:20:32 -- common/autotest_common.sh@10 -- # set +x 00:09:23.831 16:20:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:23.831 16:20:32 -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:25.268 16:20:33 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:09:25.268 16:20:33 -- common/autotest_common.sh@1184 -- # local i=0 00:09:25.268 16:20:33 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:25.268 16:20:33 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:25.268 16:20:33 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:27.171 16:20:35 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:27.171 16:20:35 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:27.171 16:20:35 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:27.171 16:20:35 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:27.171 16:20:35 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:27.171 16:20:35 -- common/autotest_common.sh@1194 -- # return 0 00:09:27.171 16:20:35 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:30.458 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.458 16:20:39 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:30.458 16:20:39 -- common/autotest_common.sh@1205 -- # local i=0 00:09:30.458 16:20:39 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:30.459 16:20:39 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:30.459 16:20:39 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:30.459 16:20:39 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:30.459 16:20:39 -- common/autotest_common.sh@1217 -- # return 0 00:09:30.459 16:20:39 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:30.459 16:20:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.459 
16:20:39 -- common/autotest_common.sh@10 -- # set +x 00:09:30.459 16:20:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.459 16:20:39 -- target/rpc.sh@81 -- # seq 1 5 00:09:30.459 16:20:39 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:30.459 16:20:39 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:30.459 16:20:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.459 16:20:39 -- common/autotest_common.sh@10 -- # set +x 00:09:30.459 16:20:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.459 16:20:39 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:30.459 16:20:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.459 16:20:39 -- common/autotest_common.sh@10 -- # set +x 00:09:30.459 [2024-04-26 16:20:39.237525] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:30.459 16:20:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.459 16:20:39 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:30.459 16:20:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.459 16:20:39 -- common/autotest_common.sh@10 -- # set +x 00:09:30.459 16:20:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.459 16:20:39 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:30.459 16:20:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.459 16:20:39 -- common/autotest_common.sh@10 -- # set +x 00:09:30.459 16:20:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.459 16:20:39 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:31.835 16:20:40 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:31.835 16:20:40 -- common/autotest_common.sh@1184 -- # local i=0 00:09:31.835 16:20:40 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:31.835 16:20:40 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:31.835 16:20:40 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:34.367 16:20:42 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:34.367 16:20:42 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:34.367 16:20:42 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:34.367 16:20:42 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:34.367 16:20:42 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:34.367 16:20:42 -- common/autotest_common.sh@1194 -- # return 0 00:09:34.367 16:20:42 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:37.657 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.657 16:20:46 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:37.657 16:20:46 -- common/autotest_common.sh@1205 -- # local i=0 00:09:37.657 16:20:46 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:37.657 16:20:46 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:37.657 16:20:46 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:37.657 16:20:46 -- common/autotest_common.sh@1213 
-- # grep -q -w SPDKISFASTANDAWESOME 00:09:37.657 16:20:46 -- common/autotest_common.sh@1217 -- # return 0 00:09:37.657 16:20:46 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:37.657 16:20:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:37.657 16:20:46 -- common/autotest_common.sh@10 -- # set +x 00:09:37.657 16:20:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:37.657 16:20:46 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:37.657 16:20:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:37.657 16:20:46 -- common/autotest_common.sh@10 -- # set +x 00:09:37.657 16:20:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:37.657 16:20:46 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:37.657 16:20:46 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:37.657 16:20:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:37.657 16:20:46 -- common/autotest_common.sh@10 -- # set +x 00:09:37.657 16:20:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:37.657 16:20:46 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:37.657 16:20:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:37.657 16:20:46 -- common/autotest_common.sh@10 -- # set +x 00:09:37.657 [2024-04-26 16:20:46.126868] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:37.657 16:20:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:37.657 16:20:46 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:37.657 16:20:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:37.657 16:20:46 -- common/autotest_common.sh@10 -- # set +x 00:09:37.657 16:20:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:37.657 16:20:46 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:37.657 16:20:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:37.657 16:20:46 -- common/autotest_common.sh@10 -- # set +x 00:09:37.657 16:20:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:37.657 16:20:46 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:39.036 16:20:47 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:39.036 16:20:47 -- common/autotest_common.sh@1184 -- # local i=0 00:09:39.036 16:20:47 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:39.036 16:20:47 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:39.036 16:20:47 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:40.942 16:20:49 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:40.942 16:20:49 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:40.942 16:20:49 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:40.942 16:20:49 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:40.942 16:20:49 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:40.942 16:20:49 -- common/autotest_common.sh@1194 -- # return 0 00:09:40.942 16:20:49 -- target/rpc.sh@90 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:09:44.229 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.229 16:20:52 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:44.229 16:20:52 -- common/autotest_common.sh@1205 -- # local i=0 00:09:44.229 16:20:52 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:44.229 16:20:52 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:44.229 16:20:52 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:44.229 16:20:52 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:44.229 16:20:52 -- common/autotest_common.sh@1217 -- # return 0 00:09:44.229 16:20:52 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:44.229 16:20:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:44.229 16:20:52 -- common/autotest_common.sh@10 -- # set +x 00:09:44.229 16:20:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:44.229 16:20:52 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:44.229 16:20:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:44.229 16:20:52 -- common/autotest_common.sh@10 -- # set +x 00:09:44.229 16:20:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:44.229 16:20:53 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:44.229 16:20:53 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:44.229 16:20:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:44.229 16:20:53 -- common/autotest_common.sh@10 -- # set +x 00:09:44.229 16:20:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:44.229 16:20:53 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:44.229 16:20:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:44.229 16:20:53 -- common/autotest_common.sh@10 -- # set +x 00:09:44.229 [2024-04-26 16:20:53.013243] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:44.229 16:20:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:44.229 16:20:53 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:44.229 16:20:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:44.229 16:20:53 -- common/autotest_common.sh@10 -- # set +x 00:09:44.229 16:20:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:44.229 16:20:53 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:44.229 16:20:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:44.229 16:20:53 -- common/autotest_common.sh@10 -- # set +x 00:09:44.229 16:20:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:44.229 16:20:53 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:45.604 16:20:54 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:45.604 16:20:54 -- common/autotest_common.sh@1184 -- # local i=0 00:09:45.604 16:20:54 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:45.604 16:20:54 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:45.604 16:20:54 -- common/autotest_common.sh@1191 -- # sleep 2 
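From here each pass of the seq 1 5 loop repeats the same subsystem lifecycle: create nqn.2016-06.io.spdk:cnode1 with serial SPDKISFASTANDAWESOME, add the RDMA listener on 192.168.100.8 port 4420, attach Malloc1 as namespace 5, allow any host, connect from the initiator, wait for the serial to show up in lsblk, then disconnect and remove the namespace and subsystem. A sketch of one iteration, assuming the harness's rpc_cmd simply forwards these arguments to scripts/rpc.py on the default /var/tmp/spdk.sock (all values copied from the trace):

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c

# target-side setup
$rpc nvmf_create_subsystem $nqn -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_listener $nqn -t rdma -a 192.168.100.8 -s 4420
$rpc nvmf_subsystem_add_ns $nqn Malloc1 -n 5
$rpc nvmf_subsystem_allow_any_host $nqn

# initiator side: connect, wait for the namespace to appear, then tear down
nvme connect -i 15 -t rdma -n $nqn -a 192.168.100.8 -s 4420 \
    --hostnqn=$hostnqn --hostid=800e967b-538f-e911-906e-001635649f5c
until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do sleep 2; done
nvme disconnect -n $nqn

# target-side teardown
$rpc nvmf_subsystem_remove_ns $nqn 5
$rpc nvmf_delete_subsystem $nqn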
00:09:48.138 16:20:56 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:48.138 16:20:56 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:48.138 16:20:56 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:48.138 16:20:56 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:48.138 16:20:56 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:48.138 16:20:56 -- common/autotest_common.sh@1194 -- # return 0 00:09:48.138 16:20:56 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:51.425 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.426 16:20:59 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:51.426 16:20:59 -- common/autotest_common.sh@1205 -- # local i=0 00:09:51.426 16:20:59 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:51.426 16:20:59 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:51.426 16:20:59 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:51.426 16:20:59 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:51.426 16:20:59 -- common/autotest_common.sh@1217 -- # return 0 00:09:51.426 16:20:59 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:51.426 16:20:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:51.426 16:20:59 -- common/autotest_common.sh@10 -- # set +x 00:09:51.426 16:20:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:51.426 16:20:59 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:51.426 16:20:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:51.426 16:20:59 -- common/autotest_common.sh@10 -- # set +x 00:09:51.426 16:20:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:51.426 16:20:59 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:51.426 16:20:59 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:51.426 16:20:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:51.426 16:20:59 -- common/autotest_common.sh@10 -- # set +x 00:09:51.426 16:20:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:51.426 16:20:59 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:51.426 16:20:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:51.426 16:20:59 -- common/autotest_common.sh@10 -- # set +x 00:09:51.426 [2024-04-26 16:20:59.930065] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:51.426 16:20:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:51.426 16:20:59 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:51.426 16:20:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:51.426 16:20:59 -- common/autotest_common.sh@10 -- # set +x 00:09:51.426 16:20:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:51.426 16:20:59 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:51.426 16:20:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:51.426 16:20:59 -- common/autotest_common.sh@10 -- # set +x 00:09:51.426 16:20:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:51.426 16:20:59 -- target/rpc.sh@86 -- # nvme connect -i 15 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:52.804 16:21:01 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:52.804 16:21:01 -- common/autotest_common.sh@1184 -- # local i=0 00:09:52.804 16:21:01 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:52.804 16:21:01 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:52.804 16:21:01 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:54.710 16:21:03 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:54.710 16:21:03 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:54.710 16:21:03 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:54.710 16:21:03 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:54.710 16:21:03 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:54.710 16:21:03 -- common/autotest_common.sh@1194 -- # return 0 00:09:54.710 16:21:03 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:57.997 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.997 16:21:06 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:57.997 16:21:06 -- common/autotest_common.sh@1205 -- # local i=0 00:09:57.997 16:21:06 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:57.997 16:21:06 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:57.997 16:21:06 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:57.997 16:21:06 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:57.997 16:21:06 -- common/autotest_common.sh@1217 -- # return 0 00:09:57.997 16:21:06 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:57.997 16:21:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:57.997 16:21:06 -- common/autotest_common.sh@10 -- # set +x 00:09:57.997 16:21:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:57.997 16:21:06 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:57.997 16:21:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:57.997 16:21:06 -- common/autotest_common.sh@10 -- # set +x 00:09:57.997 16:21:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:57.997 16:21:06 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:57.997 16:21:06 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:57.997 16:21:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:57.997 16:21:06 -- common/autotest_common.sh@10 -- # set +x 00:09:57.997 16:21:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:57.997 16:21:06 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:57.997 16:21:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:57.997 16:21:06 -- common/autotest_common.sh@10 -- # set +x 00:09:57.997 [2024-04-26 16:21:06.796316] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:57.997 16:21:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:57.997 16:21:06 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:57.997 16:21:06 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:09:57.997 16:21:06 -- common/autotest_common.sh@10 -- # set +x 00:09:57.997 16:21:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:57.997 16:21:06 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:57.997 16:21:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:57.997 16:21:06 -- common/autotest_common.sh@10 -- # set +x 00:09:57.997 16:21:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:57.997 16:21:06 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:59.374 16:21:08 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:59.374 16:21:08 -- common/autotest_common.sh@1184 -- # local i=0 00:09:59.374 16:21:08 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:59.374 16:21:08 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:59.374 16:21:08 -- common/autotest_common.sh@1191 -- # sleep 2 00:10:01.908 16:21:10 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:10:01.908 16:21:10 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:10:01.908 16:21:10 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:10:01.908 16:21:10 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:10:01.908 16:21:10 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:10:01.908 16:21:10 -- common/autotest_common.sh@1194 -- # return 0 00:10:01.908 16:21:10 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:05.207 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.207 16:21:13 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:05.207 16:21:13 -- common/autotest_common.sh@1205 -- # local i=0 00:10:05.207 16:21:13 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:10:05.207 16:21:13 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:05.207 16:21:13 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:10:05.207 16:21:13 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:05.207 16:21:13 -- common/autotest_common.sh@1217 -- # return 0 00:10:05.207 16:21:13 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:05.207 16:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:05.207 16:21:13 -- common/autotest_common.sh@10 -- # set +x 00:10:05.207 16:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:05.207 16:21:13 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:05.207 16:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:05.207 16:21:13 -- common/autotest_common.sh@10 -- # set +x 00:10:05.207 16:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:05.207 16:21:13 -- target/rpc.sh@99 -- # seq 1 5 00:10:05.207 16:21:13 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:05.207 16:21:13 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:05.207 16:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:05.207 16:21:13 -- common/autotest_common.sh@10 -- # set +x 00:10:05.207 16:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:05.207 16:21:13 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:05.207 16:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:05.207 16:21:13 -- common/autotest_common.sh@10 -- # set +x 00:10:05.207 [2024-04-26 16:21:13.683017] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:05.207 16:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:05.207 16:21:13 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:05.207 16:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:05.207 16:21:13 -- common/autotest_common.sh@10 -- # set +x 00:10:05.207 16:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:05.207 16:21:13 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:05.207 16:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:05.207 16:21:13 -- common/autotest_common.sh@10 -- # set +x 00:10:05.207 16:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:05.208 16:21:13 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.208 16:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:05.208 16:21:13 -- common/autotest_common.sh@10 -- # set +x 00:10:05.208 16:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:05.208 16:21:13 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:05.208 16:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:05.208 16:21:13 -- common/autotest_common.sh@10 -- # set +x 00:10:05.208 16:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:05.208 16:21:13 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:05.208 16:21:13 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:05.208 16:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:05.208 16:21:13 -- common/autotest_common.sh@10 -- # set +x 00:10:05.208 16:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:05.208 16:21:13 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:05.208 16:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:05.208 16:21:13 -- common/autotest_common.sh@10 -- # set +x 00:10:05.208 [2024-04-26 16:21:13.731487] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:05.208 16:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:05.208 16:21:13 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:05.208 16:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:05.208 16:21:13 -- common/autotest_common.sh@10 -- # set +x 00:10:05.208 16:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:05.208 16:21:13 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:05.208 16:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:05.208 16:21:13 -- common/autotest_common.sh@10 -- # set +x 00:10:05.208 16:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:05.208 16:21:13 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.208 16:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:05.208 16:21:13 
-- common/autotest_common.sh@10 -- # set +x 00:10:05.208 16:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:05.208 16:21:13 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:05.208 16:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:05.208 16:21:13 -- common/autotest_common.sh@10 -- # set +x 00:10:05.208 16:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:05.208 16:21:13 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:05.208 16:21:13 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:05.208 16:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:05.208 16:21:13 -- common/autotest_common.sh@10 -- # set +x 00:10:05.208 16:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:05.208 16:21:13 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:05.208 16:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:05.208 16:21:13 -- common/autotest_common.sh@10 -- # set +x 00:10:05.208 [2024-04-26 16:21:13.779644] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:05.208 16:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:05.208 16:21:13 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:05.208 16:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:05.208 16:21:13 -- common/autotest_common.sh@10 -- # set +x 00:10:05.208 16:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:05.208 16:21:13 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:05.208 16:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:05.208 16:21:13 -- common/autotest_common.sh@10 -- # set +x 00:10:05.208 16:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:05.208 16:21:13 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.208 16:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:05.208 16:21:13 -- common/autotest_common.sh@10 -- # set +x 00:10:05.208 16:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:05.208 16:21:13 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:05.208 16:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:05.208 16:21:13 -- common/autotest_common.sh@10 -- # set +x 00:10:05.208 16:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:05.208 16:21:13 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:05.208 16:21:13 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:05.208 16:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:05.208 16:21:13 -- common/autotest_common.sh@10 -- # set +x 00:10:05.208 16:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:05.208 16:21:13 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:05.208 16:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:05.208 16:21:13 -- common/autotest_common.sh@10 -- # set +x 00:10:05.208 [2024-04-26 16:21:13.831861] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:05.208 16:21:13 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:10:05.208 16:21:13 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:05.208 16:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:05.208 16:21:13 -- common/autotest_common.sh@10 -- # set +x 00:10:05.208 16:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:05.208 16:21:13 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:05.208 16:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:05.208 16:21:13 -- common/autotest_common.sh@10 -- # set +x 00:10:05.208 16:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:05.208 16:21:13 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.208 16:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:05.208 16:21:13 -- common/autotest_common.sh@10 -- # set +x 00:10:05.208 16:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:05.208 16:21:13 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:05.208 16:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:05.208 16:21:13 -- common/autotest_common.sh@10 -- # set +x 00:10:05.208 16:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:05.208 16:21:13 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:05.208 16:21:13 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:05.208 16:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:05.208 16:21:13 -- common/autotest_common.sh@10 -- # set +x 00:10:05.208 16:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:05.208 16:21:13 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:05.208 16:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:05.208 16:21:13 -- common/autotest_common.sh@10 -- # set +x 00:10:05.208 [2024-04-26 16:21:13.880024] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:05.208 16:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:05.208 16:21:13 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:05.208 16:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:05.208 16:21:13 -- common/autotest_common.sh@10 -- # set +x 00:10:05.208 16:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:05.208 16:21:13 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:05.208 16:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:05.208 16:21:13 -- common/autotest_common.sh@10 -- # set +x 00:10:05.208 16:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:05.208 16:21:13 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.208 16:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:05.208 16:21:13 -- common/autotest_common.sh@10 -- # set +x 00:10:05.208 16:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:05.208 16:21:13 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:05.208 16:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:05.208 16:21:13 -- common/autotest_common.sh@10 -- # set +x 00:10:05.208 16:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:05.208 16:21:13 -- 
target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:10:05.208 16:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:05.208 16:21:13 -- common/autotest_common.sh@10 -- # set +x 00:10:05.208 16:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:05.208 16:21:13 -- target/rpc.sh@110 -- # stats='{ 00:10:05.208 "tick_rate": 2300000000, 00:10:05.208 "poll_groups": [ 00:10:05.208 { 00:10:05.208 "name": "nvmf_tgt_poll_group_0", 00:10:05.208 "admin_qpairs": 2, 00:10:05.208 "io_qpairs": 27, 00:10:05.208 "current_admin_qpairs": 0, 00:10:05.208 "current_io_qpairs": 0, 00:10:05.208 "pending_bdev_io": 0, 00:10:05.209 "completed_nvme_io": 78, 00:10:05.209 "transports": [ 00:10:05.209 { 00:10:05.209 "trtype": "RDMA", 00:10:05.209 "pending_data_buffer": 0, 00:10:05.209 "devices": [ 00:10:05.209 { 00:10:05.209 "name": "mlx5_0", 00:10:05.209 "polls": 5913338, 00:10:05.209 "idle_polls": 5913080, 00:10:05.209 "completions": 281, 00:10:05.209 "requests": 140, 00:10:05.209 "request_latency": 24108204, 00:10:05.209 "pending_free_request": 0, 00:10:05.209 "pending_rdma_read": 0, 00:10:05.209 "pending_rdma_write": 0, 00:10:05.209 "pending_rdma_send": 0, 00:10:05.209 "total_send_wrs": 223, 00:10:05.209 "send_doorbell_updates": 129, 00:10:05.209 "total_recv_wrs": 4236, 00:10:05.209 "recv_doorbell_updates": 129 00:10:05.209 }, 00:10:05.209 { 00:10:05.209 "name": "mlx5_1", 00:10:05.209 "polls": 5913338, 00:10:05.209 "idle_polls": 5913338, 00:10:05.209 "completions": 0, 00:10:05.209 "requests": 0, 00:10:05.209 "request_latency": 0, 00:10:05.209 "pending_free_request": 0, 00:10:05.209 "pending_rdma_read": 0, 00:10:05.209 "pending_rdma_write": 0, 00:10:05.209 "pending_rdma_send": 0, 00:10:05.209 "total_send_wrs": 0, 00:10:05.209 "send_doorbell_updates": 0, 00:10:05.209 "total_recv_wrs": 4096, 00:10:05.209 "recv_doorbell_updates": 1 00:10:05.209 } 00:10:05.209 ] 00:10:05.209 } 00:10:05.209 ] 00:10:05.209 }, 00:10:05.209 { 00:10:05.209 "name": "nvmf_tgt_poll_group_1", 00:10:05.209 "admin_qpairs": 2, 00:10:05.209 "io_qpairs": 26, 00:10:05.209 "current_admin_qpairs": 0, 00:10:05.209 "current_io_qpairs": 0, 00:10:05.209 "pending_bdev_io": 0, 00:10:05.209 "completed_nvme_io": 76, 00:10:05.209 "transports": [ 00:10:05.209 { 00:10:05.209 "trtype": "RDMA", 00:10:05.209 "pending_data_buffer": 0, 00:10:05.209 "devices": [ 00:10:05.209 { 00:10:05.209 "name": "mlx5_0", 00:10:05.209 "polls": 5851731, 00:10:05.209 "idle_polls": 5851481, 00:10:05.209 "completions": 270, 00:10:05.209 "requests": 135, 00:10:05.209 "request_latency": 20813624, 00:10:05.209 "pending_free_request": 0, 00:10:05.209 "pending_rdma_read": 0, 00:10:05.209 "pending_rdma_write": 0, 00:10:05.209 "pending_rdma_send": 0, 00:10:05.209 "total_send_wrs": 214, 00:10:05.209 "send_doorbell_updates": 125, 00:10:05.209 "total_recv_wrs": 4231, 00:10:05.209 "recv_doorbell_updates": 126 00:10:05.209 }, 00:10:05.209 { 00:10:05.209 "name": "mlx5_1", 00:10:05.209 "polls": 5851731, 00:10:05.209 "idle_polls": 5851731, 00:10:05.209 "completions": 0, 00:10:05.209 "requests": 0, 00:10:05.209 "request_latency": 0, 00:10:05.209 "pending_free_request": 0, 00:10:05.209 "pending_rdma_read": 0, 00:10:05.209 "pending_rdma_write": 0, 00:10:05.209 "pending_rdma_send": 0, 00:10:05.209 "total_send_wrs": 0, 00:10:05.209 "send_doorbell_updates": 0, 00:10:05.209 "total_recv_wrs": 4096, 00:10:05.209 "recv_doorbell_updates": 1 00:10:05.209 } 00:10:05.209 ] 00:10:05.209 } 00:10:05.209 ] 00:10:05.209 }, 00:10:05.209 { 00:10:05.209 "name": "nvmf_tgt_poll_group_2", 00:10:05.209 
"admin_qpairs": 1, 00:10:05.209 "io_qpairs": 26, 00:10:05.209 "current_admin_qpairs": 0, 00:10:05.209 "current_io_qpairs": 0, 00:10:05.209 "pending_bdev_io": 0, 00:10:05.209 "completed_nvme_io": 270, 00:10:05.209 "transports": [ 00:10:05.209 { 00:10:05.209 "trtype": "RDMA", 00:10:05.209 "pending_data_buffer": 0, 00:10:05.209 "devices": [ 00:10:05.209 { 00:10:05.209 "name": "mlx5_0", 00:10:05.209 "polls": 5719277, 00:10:05.209 "idle_polls": 5718778, 00:10:05.209 "completions": 601, 00:10:05.209 "requests": 300, 00:10:05.209 "request_latency": 73750938, 00:10:05.209 "pending_free_request": 0, 00:10:05.209 "pending_rdma_read": 0, 00:10:05.209 "pending_rdma_write": 0, 00:10:05.209 "pending_rdma_send": 0, 00:10:05.209 "total_send_wrs": 560, 00:10:05.209 "send_doorbell_updates": 243, 00:10:05.209 "total_recv_wrs": 4396, 00:10:05.209 "recv_doorbell_updates": 243 00:10:05.209 }, 00:10:05.209 { 00:10:05.209 "name": "mlx5_1", 00:10:05.209 "polls": 5719277, 00:10:05.209 "idle_polls": 5719277, 00:10:05.209 "completions": 0, 00:10:05.209 "requests": 0, 00:10:05.209 "request_latency": 0, 00:10:05.209 "pending_free_request": 0, 00:10:05.209 "pending_rdma_read": 0, 00:10:05.209 "pending_rdma_write": 0, 00:10:05.209 "pending_rdma_send": 0, 00:10:05.209 "total_send_wrs": 0, 00:10:05.209 "send_doorbell_updates": 0, 00:10:05.209 "total_recv_wrs": 4096, 00:10:05.209 "recv_doorbell_updates": 1 00:10:05.209 } 00:10:05.209 ] 00:10:05.209 } 00:10:05.209 ] 00:10:05.209 }, 00:10:05.209 { 00:10:05.209 "name": "nvmf_tgt_poll_group_3", 00:10:05.209 "admin_qpairs": 2, 00:10:05.209 "io_qpairs": 26, 00:10:05.209 "current_admin_qpairs": 0, 00:10:05.209 "current_io_qpairs": 0, 00:10:05.209 "pending_bdev_io": 0, 00:10:05.209 "completed_nvme_io": 31, 00:10:05.209 "transports": [ 00:10:05.209 { 00:10:05.209 "trtype": "RDMA", 00:10:05.209 "pending_data_buffer": 0, 00:10:05.209 "devices": [ 00:10:05.209 { 00:10:05.209 "name": "mlx5_0", 00:10:05.209 "polls": 4623313, 00:10:05.209 "idle_polls": 4623138, 00:10:05.209 "completions": 178, 00:10:05.209 "requests": 89, 00:10:05.209 "request_latency": 10049992, 00:10:05.209 "pending_free_request": 0, 00:10:05.209 "pending_rdma_read": 0, 00:10:05.209 "pending_rdma_write": 0, 00:10:05.209 "pending_rdma_send": 0, 00:10:05.209 "total_send_wrs": 123, 00:10:05.209 "send_doorbell_updates": 89, 00:10:05.209 "total_recv_wrs": 4185, 00:10:05.209 "recv_doorbell_updates": 90 00:10:05.209 }, 00:10:05.209 { 00:10:05.209 "name": "mlx5_1", 00:10:05.209 "polls": 4623313, 00:10:05.209 "idle_polls": 4623313, 00:10:05.209 "completions": 0, 00:10:05.209 "requests": 0, 00:10:05.209 "request_latency": 0, 00:10:05.209 "pending_free_request": 0, 00:10:05.209 "pending_rdma_read": 0, 00:10:05.209 "pending_rdma_write": 0, 00:10:05.209 "pending_rdma_send": 0, 00:10:05.209 "total_send_wrs": 0, 00:10:05.209 "send_doorbell_updates": 0, 00:10:05.209 "total_recv_wrs": 4096, 00:10:05.209 "recv_doorbell_updates": 1 00:10:05.209 } 00:10:05.209 ] 00:10:05.209 } 00:10:05.209 ] 00:10:05.209 } 00:10:05.209 ] 00:10:05.209 }' 00:10:05.209 16:21:13 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:10:05.209 16:21:13 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:05.209 16:21:13 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:05.209 16:21:13 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:05.209 16:21:14 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:10:05.209 16:21:14 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:10:05.209 16:21:14 -- target/rpc.sh@19 -- # 
local 'filter=.poll_groups[].io_qpairs' 00:10:05.209 16:21:14 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:05.209 16:21:14 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:05.209 16:21:14 -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:10:05.209 16:21:14 -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:10:05.209 16:21:14 -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:10:05.209 16:21:14 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:10:05.209 16:21:14 -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:10:05.209 16:21:14 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:05.210 16:21:14 -- target/rpc.sh@117 -- # (( 1330 > 0 )) 00:10:05.210 16:21:14 -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:10:05.210 16:21:14 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:10:05.210 16:21:14 -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:10:05.210 16:21:14 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:05.210 16:21:14 -- target/rpc.sh@118 -- # (( 128722758 > 0 )) 00:10:05.210 16:21:14 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:10:05.210 16:21:14 -- target/rpc.sh@123 -- # nvmftestfini 00:10:05.210 16:21:14 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:05.210 16:21:14 -- nvmf/common.sh@117 -- # sync 00:10:05.210 16:21:14 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:05.210 16:21:14 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:05.210 16:21:14 -- nvmf/common.sh@120 -- # set +e 00:10:05.210 16:21:14 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:05.210 16:21:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:05.210 rmmod nvme_rdma 00:10:05.210 rmmod nvme_fabrics 00:10:05.210 16:21:14 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:05.210 16:21:14 -- nvmf/common.sh@124 -- # set -e 00:10:05.210 16:21:14 -- nvmf/common.sh@125 -- # return 0 00:10:05.210 16:21:14 -- nvmf/common.sh@478 -- # '[' -n 380729 ']' 00:10:05.210 16:21:14 -- nvmf/common.sh@479 -- # killprocess 380729 00:10:05.210 16:21:14 -- common/autotest_common.sh@936 -- # '[' -z 380729 ']' 00:10:05.210 16:21:14 -- common/autotest_common.sh@940 -- # kill -0 380729 00:10:05.210 16:21:14 -- common/autotest_common.sh@941 -- # uname 00:10:05.210 16:21:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:05.210 16:21:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 380729 00:10:05.469 16:21:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:05.469 16:21:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:05.469 16:21:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 380729' 00:10:05.469 killing process with pid 380729 00:10:05.469 16:21:14 -- common/autotest_common.sh@955 -- # kill 380729 00:10:05.469 16:21:14 -- common/autotest_common.sh@960 -- # wait 380729 00:10:05.469 [2024-04-26 16:21:14.347475] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:10:05.728 16:21:14 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:05.728 16:21:14 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:10:05.728 00:10:05.728 real 0m57.350s 00:10:05.728 user 3m23.274s 00:10:05.728 sys 0m7.675s 00:10:05.728 16:21:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:05.728 16:21:14 -- 
common/autotest_common.sh@10 -- # set +x 00:10:05.728 ************************************ 00:10:05.728 END TEST nvmf_rpc 00:10:05.728 ************************************ 00:10:05.728 16:21:14 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:10:05.728 16:21:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:05.728 16:21:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:05.728 16:21:14 -- common/autotest_common.sh@10 -- # set +x 00:10:05.987 ************************************ 00:10:05.987 START TEST nvmf_invalid 00:10:05.987 ************************************ 00:10:05.987 16:21:14 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:10:05.987 * Looking for test storage... 00:10:05.987 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:05.987 16:21:14 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:05.987 16:21:14 -- nvmf/common.sh@7 -- # uname -s 00:10:05.987 16:21:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:05.987 16:21:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:05.987 16:21:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:05.987 16:21:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:05.987 16:21:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:05.987 16:21:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:05.987 16:21:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:05.987 16:21:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:05.987 16:21:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:05.987 16:21:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:05.987 16:21:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:10:05.987 16:21:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:10:05.987 16:21:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:05.987 16:21:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:05.987 16:21:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:05.987 16:21:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:05.987 16:21:14 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:05.987 16:21:14 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:05.987 16:21:14 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:05.987 16:21:14 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:05.988 16:21:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.988 16:21:14 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.988 16:21:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.988 16:21:14 -- paths/export.sh@5 -- # export PATH 00:10:05.988 16:21:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.988 16:21:14 -- nvmf/common.sh@47 -- # : 0 00:10:05.988 16:21:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:05.988 16:21:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:05.988 16:21:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:05.988 16:21:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:05.988 16:21:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:05.988 16:21:14 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:05.988 16:21:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:05.988 16:21:14 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:05.988 16:21:14 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:05.988 16:21:14 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:05.988 16:21:14 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:10:05.988 16:21:14 -- target/invalid.sh@14 -- # target=foobar 00:10:05.988 16:21:14 -- target/invalid.sh@16 -- # RANDOM=0 00:10:05.988 16:21:14 -- target/invalid.sh@34 -- # nvmftestinit 00:10:05.988 16:21:14 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:10:05.988 16:21:14 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:05.988 16:21:14 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:05.988 16:21:14 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:05.988 16:21:14 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:05.988 16:21:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.988 16:21:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:05.988 16:21:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.988 16:21:14 -- 
nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:05.988 16:21:14 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:05.988 16:21:14 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:05.988 16:21:14 -- common/autotest_common.sh@10 -- # set +x 00:10:12.557 16:21:21 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:12.557 16:21:21 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:12.557 16:21:21 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:12.557 16:21:21 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:12.557 16:21:21 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:12.557 16:21:21 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:12.557 16:21:21 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:12.557 16:21:21 -- nvmf/common.sh@295 -- # net_devs=() 00:10:12.557 16:21:21 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:12.557 16:21:21 -- nvmf/common.sh@296 -- # e810=() 00:10:12.557 16:21:21 -- nvmf/common.sh@296 -- # local -ga e810 00:10:12.557 16:21:21 -- nvmf/common.sh@297 -- # x722=() 00:10:12.557 16:21:21 -- nvmf/common.sh@297 -- # local -ga x722 00:10:12.557 16:21:21 -- nvmf/common.sh@298 -- # mlx=() 00:10:12.557 16:21:21 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:12.557 16:21:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:12.557 16:21:21 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:12.557 16:21:21 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:12.557 16:21:21 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:12.558 16:21:21 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:12.558 16:21:21 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:12.558 16:21:21 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:12.558 16:21:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:12.558 16:21:21 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:12.558 16:21:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:12.558 16:21:21 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:12.558 16:21:21 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:12.558 16:21:21 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:12.558 16:21:21 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:12.558 16:21:21 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:12.558 16:21:21 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:12.558 16:21:21 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:12.558 16:21:21 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:12.558 16:21:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:12.558 16:21:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:10:12.558 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:10:12.558 16:21:21 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:12.558 16:21:21 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:12.558 16:21:21 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:10:12.558 16:21:21 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:10:12.558 16:21:21 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:12.558 16:21:21 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:12.558 16:21:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:12.558 16:21:21 -- nvmf/common.sh@341 -- # echo 'Found 
0000:18:00.1 (0x15b3 - 0x1013)' 00:10:12.558 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:10:12.558 16:21:21 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:12.558 16:21:21 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:12.558 16:21:21 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:10:12.558 16:21:21 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:10:12.558 16:21:21 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:12.558 16:21:21 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:12.558 16:21:21 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:12.558 16:21:21 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:12.558 16:21:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:12.558 16:21:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:12.558 16:21:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:12.558 16:21:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:12.558 16:21:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:10:12.558 Found net devices under 0000:18:00.0: mlx_0_0 00:10:12.558 16:21:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:12.558 16:21:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:12.558 16:21:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:12.558 16:21:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:12.558 16:21:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:12.558 16:21:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:10:12.558 Found net devices under 0000:18:00.1: mlx_0_1 00:10:12.558 16:21:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:12.558 16:21:21 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:12.558 16:21:21 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:12.558 16:21:21 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:12.558 16:21:21 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:10:12.558 16:21:21 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:10:12.558 16:21:21 -- nvmf/common.sh@409 -- # rdma_device_init 00:10:12.558 16:21:21 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:10:12.558 16:21:21 -- nvmf/common.sh@58 -- # uname 00:10:12.558 16:21:21 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:12.558 16:21:21 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:12.558 16:21:21 -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:12.558 16:21:21 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:12.558 16:21:21 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:12.558 16:21:21 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:12.558 16:21:21 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:12.558 16:21:21 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:12.558 16:21:21 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:10:12.558 16:21:21 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:12.558 16:21:21 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:12.558 16:21:21 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:12.558 16:21:21 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:12.558 16:21:21 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:12.558 16:21:21 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:12.558 16:21:21 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:12.558 16:21:21 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 
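The trace above resolves each detected Mellanox function (vendor 0x15b3, device 0x1013 at 0000:18:00.0 and 0000:18:00.1) to its kernel netdev by globbing sysfs, which is how mlx_0_0 and mlx_0_1 are found. A minimal stand-alone sketch of that lookup, assuming the same sysfs layout and the two PCI addresses reported in this run:

    # List the network interfaces behind each mlx5 PCI function, mirroring the
    # pci_net_devs glob in the nvmf/common.sh trace above.
    for pci in 0000:18:00.0 0000:18:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the ifname
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done

Only interfaces found this way are appended to net_devs, which the IP-allocation step that follows iterates over.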
00:10:12.558 16:21:21 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:12.558 16:21:21 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:12.558 16:21:21 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:12.558 16:21:21 -- nvmf/common.sh@105 -- # continue 2 00:10:12.558 16:21:21 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:12.558 16:21:21 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:12.558 16:21:21 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:12.558 16:21:21 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:12.558 16:21:21 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:12.558 16:21:21 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:12.558 16:21:21 -- nvmf/common.sh@105 -- # continue 2 00:10:12.558 16:21:21 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:12.558 16:21:21 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:12.558 16:21:21 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:12.558 16:21:21 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:12.558 16:21:21 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:12.558 16:21:21 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:12.558 16:21:21 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:12.558 16:21:21 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:12.558 16:21:21 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:12.558 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:12.558 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:10:12.558 altname enp24s0f0np0 00:10:12.558 altname ens785f0np0 00:10:12.558 inet 192.168.100.8/24 scope global mlx_0_0 00:10:12.558 valid_lft forever preferred_lft forever 00:10:12.558 16:21:21 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:12.558 16:21:21 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:12.558 16:21:21 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:12.558 16:21:21 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:12.558 16:21:21 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:12.558 16:21:21 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:12.558 16:21:21 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:12.558 16:21:21 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:12.558 16:21:21 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:12.558 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:12.558 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:10:12.558 altname enp24s0f1np1 00:10:12.558 altname ens785f1np1 00:10:12.558 inet 192.168.100.9/24 scope global mlx_0_1 00:10:12.558 valid_lft forever preferred_lft forever 00:10:12.558 16:21:21 -- nvmf/common.sh@411 -- # return 0 00:10:12.558 16:21:21 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:12.558 16:21:21 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:12.558 16:21:21 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:10:12.558 16:21:21 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:10:12.558 16:21:21 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:12.558 16:21:21 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:12.558 16:21:21 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:12.558 16:21:21 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:12.558 16:21:21 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:12.558 16:21:21 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 
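The allocate_nic_ips entries above read each RDMA interface's IPv4 address with ip -o -4 addr show piped through awk and cut, which is where 192.168.100.8 and 192.168.100.9 come from. The helper being traced is roughly equivalent to this sketch (interface names are the ones reported in this run and will likely differ elsewhere):

    # Return the first IPv4 address configured on an interface.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # 192.168.100.8 on this host
    get_ip_address mlx_0_1   # 192.168.100.9 on this host

RDMA_IP_LIST is then just these two addresses, and the head -n 1 / tail -n +2 pair below it picks NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP out of the list.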
00:10:12.558 16:21:21 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:12.558 16:21:21 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:12.558 16:21:21 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:12.558 16:21:21 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:12.558 16:21:21 -- nvmf/common.sh@105 -- # continue 2 00:10:12.558 16:21:21 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:12.558 16:21:21 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:12.558 16:21:21 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:12.558 16:21:21 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:12.558 16:21:21 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:12.558 16:21:21 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:12.558 16:21:21 -- nvmf/common.sh@105 -- # continue 2 00:10:12.558 16:21:21 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:12.558 16:21:21 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:12.558 16:21:21 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:12.558 16:21:21 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:12.558 16:21:21 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:12.558 16:21:21 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:12.558 16:21:21 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:12.558 16:21:21 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:12.558 16:21:21 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:12.558 16:21:21 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:12.558 16:21:21 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:12.558 16:21:21 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:12.558 16:21:21 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:10:12.558 192.168.100.9' 00:10:12.558 16:21:21 -- nvmf/common.sh@446 -- # head -n 1 00:10:12.558 16:21:21 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:10:12.558 192.168.100.9' 00:10:12.558 16:21:21 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:12.558 16:21:21 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:10:12.558 192.168.100.9' 00:10:12.558 16:21:21 -- nvmf/common.sh@447 -- # tail -n +2 00:10:12.558 16:21:21 -- nvmf/common.sh@447 -- # head -n 1 00:10:12.558 16:21:21 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:12.558 16:21:21 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:10:12.558 16:21:21 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:12.558 16:21:21 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:10:12.558 16:21:21 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:10:12.558 16:21:21 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:10:12.558 16:21:21 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:10:12.558 16:21:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:12.558 16:21:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:12.558 16:21:21 -- common/autotest_common.sh@10 -- # set +x 00:10:12.558 16:21:21 -- nvmf/common.sh@470 -- # nvmfpid=391181 00:10:12.558 16:21:21 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:12.558 16:21:21 -- nvmf/common.sh@471 -- # waitforlisten 391181 00:10:12.558 16:21:21 -- common/autotest_common.sh@817 -- # '[' -z 391181 ']' 00:10:12.558 16:21:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.558 16:21:21 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:10:12.558 16:21:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.558 16:21:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:12.558 16:21:21 -- common/autotest_common.sh@10 -- # set +x 00:10:12.558 [2024-04-26 16:21:21.425883] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:10:12.559 [2024-04-26 16:21:21.425935] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.559 EAL: No free 2048 kB hugepages reported on node 1 00:10:12.559 [2024-04-26 16:21:21.496173] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:12.559 [2024-04-26 16:21:21.577429] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:12.559 [2024-04-26 16:21:21.577472] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:12.559 [2024-04-26 16:21:21.577482] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:12.559 [2024-04-26 16:21:21.577492] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:12.559 [2024-04-26 16:21:21.577499] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:12.559 [2024-04-26 16:21:21.577557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.559 [2024-04-26 16:21:21.577641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:12.559 [2024-04-26 16:21:21.577717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:12.559 [2024-04-26 16:21:21.577719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.496 16:21:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:13.496 16:21:22 -- common/autotest_common.sh@850 -- # return 0 00:10:13.496 16:21:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:13.496 16:21:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:13.496 16:21:22 -- common/autotest_common.sh@10 -- # set +x 00:10:13.496 16:21:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:13.496 16:21:22 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:13.496 16:21:22 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode11386 00:10:13.496 [2024-04-26 16:21:22.455754] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:10:13.496 16:21:22 -- target/invalid.sh@40 -- # out='request: 00:10:13.496 { 00:10:13.496 "nqn": "nqn.2016-06.io.spdk:cnode11386", 00:10:13.496 "tgt_name": "foobar", 00:10:13.496 "method": "nvmf_create_subsystem", 00:10:13.496 "req_id": 1 00:10:13.496 } 00:10:13.496 Got JSON-RPC error response 00:10:13.496 response: 00:10:13.496 { 00:10:13.496 "code": -32603, 00:10:13.496 "message": "Unable to find target foobar" 00:10:13.496 }' 00:10:13.496 16:21:22 -- target/invalid.sh@41 -- # [[ request: 00:10:13.496 { 00:10:13.496 "nqn": 
"nqn.2016-06.io.spdk:cnode11386", 00:10:13.496 "tgt_name": "foobar", 00:10:13.496 "method": "nvmf_create_subsystem", 00:10:13.496 "req_id": 1 00:10:13.496 } 00:10:13.496 Got JSON-RPC error response 00:10:13.496 response: 00:10:13.496 { 00:10:13.496 "code": -32603, 00:10:13.496 "message": "Unable to find target foobar" 00:10:13.496 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:10:13.496 16:21:22 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:10:13.496 16:21:22 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode19328 00:10:13.755 [2024-04-26 16:21:22.656479] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19328: invalid serial number 'SPDKISFASTANDAWESOME' 00:10:13.755 16:21:22 -- target/invalid.sh@45 -- # out='request: 00:10:13.755 { 00:10:13.755 "nqn": "nqn.2016-06.io.spdk:cnode19328", 00:10:13.755 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:13.755 "method": "nvmf_create_subsystem", 00:10:13.755 "req_id": 1 00:10:13.755 } 00:10:13.755 Got JSON-RPC error response 00:10:13.755 response: 00:10:13.755 { 00:10:13.755 "code": -32602, 00:10:13.755 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:13.755 }' 00:10:13.755 16:21:22 -- target/invalid.sh@46 -- # [[ request: 00:10:13.755 { 00:10:13.755 "nqn": "nqn.2016-06.io.spdk:cnode19328", 00:10:13.755 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:13.755 "method": "nvmf_create_subsystem", 00:10:13.755 "req_id": 1 00:10:13.755 } 00:10:13.755 Got JSON-RPC error response 00:10:13.755 response: 00:10:13.755 { 00:10:13.755 "code": -32602, 00:10:13.755 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:13.755 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:13.755 16:21:22 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:10:13.755 16:21:22 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode8186 00:10:14.015 [2024-04-26 16:21:22.849068] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8186: invalid model number 'SPDK_Controller' 00:10:14.015 16:21:22 -- target/invalid.sh@50 -- # out='request: 00:10:14.015 { 00:10:14.015 "nqn": "nqn.2016-06.io.spdk:cnode8186", 00:10:14.015 "model_number": "SPDK_Controller\u001f", 00:10:14.015 "method": "nvmf_create_subsystem", 00:10:14.015 "req_id": 1 00:10:14.015 } 00:10:14.015 Got JSON-RPC error response 00:10:14.015 response: 00:10:14.015 { 00:10:14.015 "code": -32602, 00:10:14.015 "message": "Invalid MN SPDK_Controller\u001f" 00:10:14.015 }' 00:10:14.015 16:21:22 -- target/invalid.sh@51 -- # [[ request: 00:10:14.015 { 00:10:14.015 "nqn": "nqn.2016-06.io.spdk:cnode8186", 00:10:14.015 "model_number": "SPDK_Controller\u001f", 00:10:14.015 "method": "nvmf_create_subsystem", 00:10:14.015 "req_id": 1 00:10:14.015 } 00:10:14.015 Got JSON-RPC error response 00:10:14.015 response: 00:10:14.015 { 00:10:14.015 "code": -32602, 00:10:14.015 "message": "Invalid MN SPDK_Controller\u001f" 00:10:14.015 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:14.015 16:21:22 -- target/invalid.sh@54 -- # gen_random_s 21 00:10:14.015 16:21:22 -- target/invalid.sh@19 -- # local length=21 ll 00:10:14.015 16:21:22 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' 
'70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:14.015 16:21:22 -- target/invalid.sh@21 -- # local chars 00:10:14.015 16:21:22 -- target/invalid.sh@22 -- # local string 00:10:14.015 16:21:22 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:14.015 16:21:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.015 16:21:22 -- target/invalid.sh@25 -- # printf %x 93 00:10:14.015 16:21:22 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:10:14.015 16:21:22 -- target/invalid.sh@25 -- # string+=']' 00:10:14.015 16:21:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.015 16:21:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.015 16:21:22 -- target/invalid.sh@25 -- # printf %x 96 00:10:14.015 16:21:22 -- target/invalid.sh@25 -- # echo -e '\x60' 00:10:14.015 16:21:22 -- target/invalid.sh@25 -- # string+='`' 00:10:14.015 16:21:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.015 16:21:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.015 16:21:22 -- target/invalid.sh@25 -- # printf %x 108 00:10:14.015 16:21:22 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:10:14.015 16:21:22 -- target/invalid.sh@25 -- # string+=l 00:10:14.015 16:21:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.015 16:21:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.015 16:21:22 -- target/invalid.sh@25 -- # printf %x 38 00:10:14.015 16:21:22 -- target/invalid.sh@25 -- # echo -e '\x26' 00:10:14.015 16:21:22 -- target/invalid.sh@25 -- # string+='&' 00:10:14.015 16:21:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.015 16:21:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.015 16:21:22 -- target/invalid.sh@25 -- # printf %x 71 00:10:14.015 16:21:22 -- target/invalid.sh@25 -- # echo -e '\x47' 00:10:14.015 16:21:22 -- target/invalid.sh@25 -- # string+=G 00:10:14.015 16:21:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.015 16:21:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.015 16:21:22 -- target/invalid.sh@25 -- # printf %x 100 00:10:14.015 16:21:22 -- target/invalid.sh@25 -- # echo -e '\x64' 00:10:14.015 16:21:22 -- target/invalid.sh@25 -- # string+=d 00:10:14.015 16:21:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.015 16:21:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.015 16:21:22 -- target/invalid.sh@25 -- # printf %x 108 00:10:14.015 16:21:22 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:10:14.015 16:21:22 -- target/invalid.sh@25 -- # string+=l 00:10:14.015 16:21:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.015 16:21:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.015 16:21:22 -- target/invalid.sh@25 -- # printf %x 63 00:10:14.015 16:21:22 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:10:14.015 16:21:22 -- target/invalid.sh@25 -- # string+='?' 
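The long run of printf / echo -e / string+= entries here is gen_random_s assembling a 21-character serial number from the ASCII code points 32 through 127; with RANDOM=0 set by invalid.sh earlier in the log, the sequence is reproducible from run to run. A condensed sketch of the pattern visible in the trace (the modulo-based index selection is an assumption, the traced helper may choose indices slightly differently):

    # Build a pseudo-random printable-ASCII string, one character per pass,
    # the way the surrounding xtrace output shows it being done.
    gen_random_s() {
        local length=$1 ll string=
        local chars=($(seq 32 127))     # candidate code points, as in the trace
        for (( ll = 0; ll < length; ll++ )); do
            string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
        done
        echo "$string"
    }
    gen_random_s 21

The 21- and, later, 41-character lengths are deliberate: one byte longer than the 20-byte serial-number and 40-byte model-number fields NVMe allows, so the target has to reject them.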
00:10:14.015 16:21:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.015 16:21:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.015 16:21:22 -- target/invalid.sh@25 -- # printf %x 100 00:10:14.015 16:21:22 -- target/invalid.sh@25 -- # echo -e '\x64' 00:10:14.015 16:21:22 -- target/invalid.sh@25 -- # string+=d 00:10:14.015 16:21:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.015 16:21:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.015 16:21:22 -- target/invalid.sh@25 -- # printf %x 123 00:10:14.015 16:21:22 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:10:14.015 16:21:22 -- target/invalid.sh@25 -- # string+='{' 00:10:14.015 16:21:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.015 16:21:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.015 16:21:22 -- target/invalid.sh@25 -- # printf %x 82 00:10:14.015 16:21:22 -- target/invalid.sh@25 -- # echo -e '\x52' 00:10:14.015 16:21:22 -- target/invalid.sh@25 -- # string+=R 00:10:14.015 16:21:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.016 16:21:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.016 16:21:22 -- target/invalid.sh@25 -- # printf %x 88 00:10:14.016 16:21:22 -- target/invalid.sh@25 -- # echo -e '\x58' 00:10:14.016 16:21:22 -- target/invalid.sh@25 -- # string+=X 00:10:14.016 16:21:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.016 16:21:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.016 16:21:22 -- target/invalid.sh@25 -- # printf %x 44 00:10:14.016 16:21:22 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:10:14.016 16:21:22 -- target/invalid.sh@25 -- # string+=, 00:10:14.016 16:21:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.016 16:21:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.016 16:21:22 -- target/invalid.sh@25 -- # printf %x 100 00:10:14.016 16:21:22 -- target/invalid.sh@25 -- # echo -e '\x64' 00:10:14.016 16:21:22 -- target/invalid.sh@25 -- # string+=d 00:10:14.016 16:21:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.016 16:21:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.016 16:21:22 -- target/invalid.sh@25 -- # printf %x 87 00:10:14.016 16:21:22 -- target/invalid.sh@25 -- # echo -e '\x57' 00:10:14.016 16:21:22 -- target/invalid.sh@25 -- # string+=W 00:10:14.016 16:21:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.016 16:21:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.016 16:21:22 -- target/invalid.sh@25 -- # printf %x 43 00:10:14.016 16:21:22 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:10:14.016 16:21:22 -- target/invalid.sh@25 -- # string+=+ 00:10:14.016 16:21:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.016 16:21:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.016 16:21:23 -- target/invalid.sh@25 -- # printf %x 50 00:10:14.016 16:21:23 -- target/invalid.sh@25 -- # echo -e '\x32' 00:10:14.016 16:21:23 -- target/invalid.sh@25 -- # string+=2 00:10:14.016 16:21:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.016 16:21:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.016 16:21:23 -- target/invalid.sh@25 -- # printf %x 62 00:10:14.016 16:21:23 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:10:14.016 16:21:23 -- target/invalid.sh@25 -- # string+='>' 00:10:14.016 16:21:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.016 16:21:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.016 16:21:23 -- target/invalid.sh@25 -- # printf %x 104 00:10:14.016 16:21:23 -- target/invalid.sh@25 -- # echo -e '\x68' 00:10:14.016 16:21:23 -- target/invalid.sh@25 -- # string+=h 
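Each generated string then feeds the same negative-test shape: issue the RPC, capture the combined output, and assert on the error text rather than the exit status alone (the *\I\n\v\a\l\i\d\ \S\N* match appears a few entries below). A rough sketch of that shape, reusing the rpc.py path and the cnode15959 NQN from this log; the exact capture and trap handling in invalid.sh may differ:

    # The RPC must fail, and the JSON-RPC error body must name the right reason.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    serial=$(gen_random_s 21)
    if ! out=$("$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15959 -s "$serial" 2>&1); then
        [[ $out == *"Invalid SN"* ]] && echo "rejected as expected"
    fi

The shorter cases earlier in the log exercise the same validation differently, by appending a non-printable 0x1f byte to an otherwise valid serial or model string.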
00:10:14.016 16:21:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.016 16:21:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.016 16:21:23 -- target/invalid.sh@25 -- # printf %x 80 00:10:14.016 16:21:23 -- target/invalid.sh@25 -- # echo -e '\x50' 00:10:14.016 16:21:23 -- target/invalid.sh@25 -- # string+=P 00:10:14.016 16:21:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.016 16:21:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.016 16:21:23 -- target/invalid.sh@25 -- # printf %x 80 00:10:14.016 16:21:23 -- target/invalid.sh@25 -- # echo -e '\x50' 00:10:14.016 16:21:23 -- target/invalid.sh@25 -- # string+=P 00:10:14.016 16:21:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.016 16:21:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.016 16:21:23 -- target/invalid.sh@28 -- # [[ ] == \- ]] 00:10:14.016 16:21:23 -- target/invalid.sh@31 -- # echo ']`l&Gdl?d{RX,dW+2>hPP' 00:10:14.275 16:21:23 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s ']`l&Gdl?d{RX,dW+2>hPP' nqn.2016-06.io.spdk:cnode15959 00:10:14.276 [2024-04-26 16:21:23.194213] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15959: invalid serial number ']`l&Gdl?d{RX,dW+2>hPP' 00:10:14.276 16:21:23 -- target/invalid.sh@54 -- # out='request: 00:10:14.276 { 00:10:14.276 "nqn": "nqn.2016-06.io.spdk:cnode15959", 00:10:14.276 "serial_number": "]`l&Gdl?d{RX,dW+2>hPP", 00:10:14.276 "method": "nvmf_create_subsystem", 00:10:14.276 "req_id": 1 00:10:14.276 } 00:10:14.276 Got JSON-RPC error response 00:10:14.276 response: 00:10:14.276 { 00:10:14.276 "code": -32602, 00:10:14.276 "message": "Invalid SN ]`l&Gdl?d{RX,dW+2>hPP" 00:10:14.276 }' 00:10:14.276 16:21:23 -- target/invalid.sh@55 -- # [[ request: 00:10:14.276 { 00:10:14.276 "nqn": "nqn.2016-06.io.spdk:cnode15959", 00:10:14.276 "serial_number": "]`l&Gdl?d{RX,dW+2>hPP", 00:10:14.276 "method": "nvmf_create_subsystem", 00:10:14.276 "req_id": 1 00:10:14.276 } 00:10:14.276 Got JSON-RPC error response 00:10:14.276 response: 00:10:14.276 { 00:10:14.276 "code": -32602, 00:10:14.276 "message": "Invalid SN ]`l&Gdl?d{RX,dW+2>hPP" 00:10:14.276 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:14.276 16:21:23 -- target/invalid.sh@58 -- # gen_random_s 41 00:10:14.276 16:21:23 -- target/invalid.sh@19 -- # local length=41 ll 00:10:14.276 16:21:23 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:14.276 16:21:23 -- target/invalid.sh@21 -- # local chars 00:10:14.276 16:21:23 -- target/invalid.sh@22 -- # local string 00:10:14.276 16:21:23 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:14.276 16:21:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.276 16:21:23 -- target/invalid.sh@25 -- # printf %x 58 00:10:14.276 16:21:23 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:10:14.276 16:21:23 -- target/invalid.sh@25 -- # string+=: 00:10:14.276 16:21:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.276 16:21:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.276 16:21:23 -- 
target/invalid.sh@25 -- # printf %x 110 00:10:14.276 16:21:23 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:10:14.276 16:21:23 -- target/invalid.sh@25 -- # string+=n 00:10:14.276 16:21:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.276 16:21:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.276 16:21:23 -- target/invalid.sh@25 -- # printf %x 71 00:10:14.276 16:21:23 -- target/invalid.sh@25 -- # echo -e '\x47' 00:10:14.276 16:21:23 -- target/invalid.sh@25 -- # string+=G 00:10:14.276 16:21:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.276 16:21:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.276 16:21:23 -- target/invalid.sh@25 -- # printf %x 124 00:10:14.276 16:21:23 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:10:14.276 16:21:23 -- target/invalid.sh@25 -- # string+='|' 00:10:14.276 16:21:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.276 16:21:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.276 16:21:23 -- target/invalid.sh@25 -- # printf %x 74 00:10:14.276 16:21:23 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:10:14.276 16:21:23 -- target/invalid.sh@25 -- # string+=J 00:10:14.276 16:21:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.276 16:21:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.276 16:21:23 -- target/invalid.sh@25 -- # printf %x 40 00:10:14.276 16:21:23 -- target/invalid.sh@25 -- # echo -e '\x28' 00:10:14.276 16:21:23 -- target/invalid.sh@25 -- # string+='(' 00:10:14.276 16:21:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.276 16:21:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.276 16:21:23 -- target/invalid.sh@25 -- # printf %x 91 00:10:14.276 16:21:23 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:10:14.276 16:21:23 -- target/invalid.sh@25 -- # string+='[' 00:10:14.276 16:21:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.276 16:21:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.276 16:21:23 -- target/invalid.sh@25 -- # printf %x 111 00:10:14.276 16:21:23 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:10:14.276 16:21:23 -- target/invalid.sh@25 -- # string+=o 00:10:14.276 16:21:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.276 16:21:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.276 16:21:23 -- target/invalid.sh@25 -- # printf %x 117 00:10:14.276 16:21:23 -- target/invalid.sh@25 -- # echo -e '\x75' 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # string+=u 00:10:14.535 16:21:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.535 16:21:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # printf %x 41 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # echo -e '\x29' 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # string+=')' 00:10:14.535 16:21:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.535 16:21:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # printf %x 43 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # string+=+ 00:10:14.535 16:21:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.535 16:21:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # printf %x 40 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # echo -e '\x28' 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # string+='(' 00:10:14.535 16:21:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.535 16:21:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.535 16:21:23 
-- target/invalid.sh@25 -- # printf %x 106 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # string+=j 00:10:14.535 16:21:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.535 16:21:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # printf %x 78 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # string+=N 00:10:14.535 16:21:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.535 16:21:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # printf %x 103 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # echo -e '\x67' 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # string+=g 00:10:14.535 16:21:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.535 16:21:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # printf %x 71 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # echo -e '\x47' 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # string+=G 00:10:14.535 16:21:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.535 16:21:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # printf %x 80 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # echo -e '\x50' 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # string+=P 00:10:14.535 16:21:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.535 16:21:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # printf %x 34 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # echo -e '\x22' 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # string+='"' 00:10:14.535 16:21:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.535 16:21:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # printf %x 112 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # echo -e '\x70' 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # string+=p 00:10:14.535 16:21:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.535 16:21:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # printf %x 69 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # echo -e '\x45' 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # string+=E 00:10:14.535 16:21:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.535 16:21:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # printf %x 75 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # string+=K 00:10:14.535 16:21:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.535 16:21:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # printf %x 108 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # string+=l 00:10:14.535 16:21:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.535 16:21:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # printf %x 100 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # echo -e '\x64' 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # string+=d 00:10:14.535 16:21:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.535 16:21:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.535 16:21:23 -- 
target/invalid.sh@25 -- # printf %x 72 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # echo -e '\x48' 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # string+=H 00:10:14.535 16:21:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.535 16:21:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # printf %x 84 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # echo -e '\x54' 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # string+=T 00:10:14.535 16:21:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.535 16:21:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # printf %x 82 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # echo -e '\x52' 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # string+=R 00:10:14.535 16:21:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.535 16:21:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # printf %x 55 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # echo -e '\x37' 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # string+=7 00:10:14.535 16:21:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.535 16:21:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # printf %x 43 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:10:14.535 16:21:23 -- target/invalid.sh@25 -- # string+=+ 00:10:14.536 16:21:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.536 16:21:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.536 16:21:23 -- target/invalid.sh@25 -- # printf %x 108 00:10:14.536 16:21:23 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:10:14.536 16:21:23 -- target/invalid.sh@25 -- # string+=l 00:10:14.536 16:21:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.536 16:21:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.536 16:21:23 -- target/invalid.sh@25 -- # printf %x 113 00:10:14.536 16:21:23 -- target/invalid.sh@25 -- # echo -e '\x71' 00:10:14.536 16:21:23 -- target/invalid.sh@25 -- # string+=q 00:10:14.536 16:21:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.536 16:21:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.536 16:21:23 -- target/invalid.sh@25 -- # printf %x 89 00:10:14.536 16:21:23 -- target/invalid.sh@25 -- # echo -e '\x59' 00:10:14.536 16:21:23 -- target/invalid.sh@25 -- # string+=Y 00:10:14.536 16:21:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.536 16:21:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.536 16:21:23 -- target/invalid.sh@25 -- # printf %x 83 00:10:14.536 16:21:23 -- target/invalid.sh@25 -- # echo -e '\x53' 00:10:14.536 16:21:23 -- target/invalid.sh@25 -- # string+=S 00:10:14.536 16:21:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.536 16:21:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.536 16:21:23 -- target/invalid.sh@25 -- # printf %x 89 00:10:14.536 16:21:23 -- target/invalid.sh@25 -- # echo -e '\x59' 00:10:14.536 16:21:23 -- target/invalid.sh@25 -- # string+=Y 00:10:14.536 16:21:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.536 16:21:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.536 16:21:23 -- target/invalid.sh@25 -- # printf %x 72 00:10:14.536 16:21:23 -- target/invalid.sh@25 -- # echo -e '\x48' 00:10:14.536 16:21:23 -- target/invalid.sh@25 -- # string+=H 00:10:14.536 16:21:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.536 16:21:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.536 16:21:23 -- 
target/invalid.sh@25 -- # printf %x 32 00:10:14.536 16:21:23 -- target/invalid.sh@25 -- # echo -e '\x20' 00:10:14.536 16:21:23 -- target/invalid.sh@25 -- # string+=' ' 00:10:14.536 16:21:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.536 16:21:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.536 16:21:23 -- target/invalid.sh@25 -- # printf %x 71 00:10:14.536 16:21:23 -- target/invalid.sh@25 -- # echo -e '\x47' 00:10:14.536 16:21:23 -- target/invalid.sh@25 -- # string+=G 00:10:14.536 16:21:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.536 16:21:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.536 16:21:23 -- target/invalid.sh@25 -- # printf %x 48 00:10:14.536 16:21:23 -- target/invalid.sh@25 -- # echo -e '\x30' 00:10:14.536 16:21:23 -- target/invalid.sh@25 -- # string+=0 00:10:14.536 16:21:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.536 16:21:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.536 16:21:23 -- target/invalid.sh@25 -- # printf %x 84 00:10:14.536 16:21:23 -- target/invalid.sh@25 -- # echo -e '\x54' 00:10:14.536 16:21:23 -- target/invalid.sh@25 -- # string+=T 00:10:14.536 16:21:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.536 16:21:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.536 16:21:23 -- target/invalid.sh@25 -- # printf %x 62 00:10:14.536 16:21:23 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:10:14.536 16:21:23 -- target/invalid.sh@25 -- # string+='>' 00:10:14.536 16:21:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.536 16:21:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.536 16:21:23 -- target/invalid.sh@25 -- # printf %x 109 00:10:14.536 16:21:23 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:10:14.536 16:21:23 -- target/invalid.sh@25 -- # string+=m 00:10:14.536 16:21:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.536 16:21:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.536 16:21:23 -- target/invalid.sh@25 -- # printf %x 73 00:10:14.536 16:21:23 -- target/invalid.sh@25 -- # echo -e '\x49' 00:10:14.536 16:21:23 -- target/invalid.sh@25 -- # string+=I 00:10:14.536 16:21:23 -- target/invalid.sh@24 -- # (( ll++ )) 00:10:14.536 16:21:23 -- target/invalid.sh@24 -- # (( ll < length )) 00:10:14.536 16:21:23 -- target/invalid.sh@28 -- # [[ : == \- ]] 00:10:14.536 16:21:23 -- target/invalid.sh@31 -- # echo ':nG|J([ou)+(jNgGP"pEKldHTR7+lqYSYH G0T>mI' 00:10:14.536 16:21:23 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d ':nG|J([ou)+(jNgGP"pEKldHTR7+lqYSYH G0T>mI' nqn.2016-06.io.spdk:cnode64 00:10:14.794 [2024-04-26 16:21:23.703876] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode64: invalid model number ':nG|J([ou)+(jNgGP"pEKldHTR7+lqYSYH G0T>mI' 00:10:14.794 16:21:23 -- target/invalid.sh@58 -- # out='request: 00:10:14.794 { 00:10:14.794 "nqn": "nqn.2016-06.io.spdk:cnode64", 00:10:14.794 "model_number": ":nG|J([ou)+(jNgGP\"pEKldHTR7+lqYSYH G0T>mI", 00:10:14.794 "method": "nvmf_create_subsystem", 00:10:14.794 "req_id": 1 00:10:14.794 } 00:10:14.794 Got JSON-RPC error response 00:10:14.794 response: 00:10:14.794 { 00:10:14.794 "code": -32602, 00:10:14.794 "message": "Invalid MN :nG|J([ou)+(jNgGP\"pEKldHTR7+lqYSYH G0T>mI" 00:10:14.794 }' 00:10:14.794 16:21:23 -- target/invalid.sh@59 -- # [[ request: 00:10:14.794 { 00:10:14.794 "nqn": "nqn.2016-06.io.spdk:cnode64", 00:10:14.794 "model_number": ":nG|J([ou)+(jNgGP\"pEKldHTR7+lqYSYH G0T>mI", 00:10:14.795 "method": 
"nvmf_create_subsystem", 00:10:14.795 "req_id": 1 00:10:14.795 } 00:10:14.795 Got JSON-RPC error response 00:10:14.795 response: 00:10:14.795 { 00:10:14.795 "code": -32602, 00:10:14.795 "message": "Invalid MN :nG|J([ou)+(jNgGP\"pEKldHTR7+lqYSYH G0T>mI" 00:10:14.795 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:14.795 16:21:23 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma 00:10:15.053 [2024-04-26 16:21:23.921939] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x21399d0/0x213dec0) succeed. 00:10:15.053 [2024-04-26 16:21:23.932259] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x213b010/0x217f550) succeed. 00:10:15.312 16:21:24 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:10:15.312 16:21:24 -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]] 00:10:15.312 16:21:24 -- target/invalid.sh@67 -- # echo '192.168.100.8 00:10:15.312 192.168.100.9' 00:10:15.312 16:21:24 -- target/invalid.sh@67 -- # head -n 1 00:10:15.312 16:21:24 -- target/invalid.sh@67 -- # IP=192.168.100.8 00:10:15.312 16:21:24 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421 00:10:15.570 [2024-04-26 16:21:24.434875] nvmf_rpc.c: 792:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:10:15.571 16:21:24 -- target/invalid.sh@69 -- # out='request: 00:10:15.571 { 00:10:15.571 "nqn": "nqn.2016-06.io.spdk:cnode", 00:10:15.571 "listen_address": { 00:10:15.571 "trtype": "rdma", 00:10:15.571 "traddr": "192.168.100.8", 00:10:15.571 "trsvcid": "4421" 00:10:15.571 }, 00:10:15.571 "method": "nvmf_subsystem_remove_listener", 00:10:15.571 "req_id": 1 00:10:15.571 } 00:10:15.571 Got JSON-RPC error response 00:10:15.571 response: 00:10:15.571 { 00:10:15.571 "code": -32602, 00:10:15.571 "message": "Invalid parameters" 00:10:15.571 }' 00:10:15.571 16:21:24 -- target/invalid.sh@70 -- # [[ request: 00:10:15.571 { 00:10:15.571 "nqn": "nqn.2016-06.io.spdk:cnode", 00:10:15.571 "listen_address": { 00:10:15.571 "trtype": "rdma", 00:10:15.571 "traddr": "192.168.100.8", 00:10:15.571 "trsvcid": "4421" 00:10:15.571 }, 00:10:15.571 "method": "nvmf_subsystem_remove_listener", 00:10:15.571 "req_id": 1 00:10:15.571 } 00:10:15.571 Got JSON-RPC error response 00:10:15.571 response: 00:10:15.571 { 00:10:15.571 "code": -32602, 00:10:15.571 "message": "Invalid parameters" 00:10:15.571 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:10:15.571 16:21:24 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19381 -i 0 00:10:15.830 [2024-04-26 16:21:24.623506] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19381: invalid cntlid range [0-65519] 00:10:15.830 16:21:24 -- target/invalid.sh@73 -- # out='request: 00:10:15.830 { 00:10:15.830 "nqn": "nqn.2016-06.io.spdk:cnode19381", 00:10:15.830 "min_cntlid": 0, 00:10:15.830 "method": "nvmf_create_subsystem", 00:10:15.830 "req_id": 1 00:10:15.831 } 00:10:15.831 Got JSON-RPC error response 00:10:15.831 response: 00:10:15.831 { 00:10:15.831 "code": -32602, 00:10:15.831 "message": "Invalid cntlid range [0-65519]" 00:10:15.831 }' 00:10:15.831 16:21:24 -- target/invalid.sh@74 -- # [[ request: 00:10:15.831 { 00:10:15.831 "nqn": 
"nqn.2016-06.io.spdk:cnode19381", 00:10:15.831 "min_cntlid": 0, 00:10:15.831 "method": "nvmf_create_subsystem", 00:10:15.831 "req_id": 1 00:10:15.831 } 00:10:15.831 Got JSON-RPC error response 00:10:15.831 response: 00:10:15.831 { 00:10:15.831 "code": -32602, 00:10:15.831 "message": "Invalid cntlid range [0-65519]" 00:10:15.831 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:15.831 16:21:24 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27316 -i 65520 00:10:15.831 [2024-04-26 16:21:24.796088] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27316: invalid cntlid range [65520-65519] 00:10:15.831 16:21:24 -- target/invalid.sh@75 -- # out='request: 00:10:15.831 { 00:10:15.831 "nqn": "nqn.2016-06.io.spdk:cnode27316", 00:10:15.831 "min_cntlid": 65520, 00:10:15.831 "method": "nvmf_create_subsystem", 00:10:15.831 "req_id": 1 00:10:15.831 } 00:10:15.831 Got JSON-RPC error response 00:10:15.831 response: 00:10:15.831 { 00:10:15.831 "code": -32602, 00:10:15.831 "message": "Invalid cntlid range [65520-65519]" 00:10:15.831 }' 00:10:15.831 16:21:24 -- target/invalid.sh@76 -- # [[ request: 00:10:15.831 { 00:10:15.831 "nqn": "nqn.2016-06.io.spdk:cnode27316", 00:10:15.831 "min_cntlid": 65520, 00:10:15.831 "method": "nvmf_create_subsystem", 00:10:15.831 "req_id": 1 00:10:15.831 } 00:10:15.831 Got JSON-RPC error response 00:10:15.831 response: 00:10:15.831 { 00:10:15.831 "code": -32602, 00:10:15.831 "message": "Invalid cntlid range [65520-65519]" 00:10:15.831 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:15.831 16:21:24 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29220 -I 0 00:10:16.090 [2024-04-26 16:21:24.980738] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29220: invalid cntlid range [1-0] 00:10:16.090 16:21:25 -- target/invalid.sh@77 -- # out='request: 00:10:16.090 { 00:10:16.091 "nqn": "nqn.2016-06.io.spdk:cnode29220", 00:10:16.091 "max_cntlid": 0, 00:10:16.091 "method": "nvmf_create_subsystem", 00:10:16.091 "req_id": 1 00:10:16.091 } 00:10:16.091 Got JSON-RPC error response 00:10:16.091 response: 00:10:16.091 { 00:10:16.091 "code": -32602, 00:10:16.091 "message": "Invalid cntlid range [1-0]" 00:10:16.091 }' 00:10:16.091 16:21:25 -- target/invalid.sh@78 -- # [[ request: 00:10:16.091 { 00:10:16.091 "nqn": "nqn.2016-06.io.spdk:cnode29220", 00:10:16.091 "max_cntlid": 0, 00:10:16.091 "method": "nvmf_create_subsystem", 00:10:16.091 "req_id": 1 00:10:16.091 } 00:10:16.091 Got JSON-RPC error response 00:10:16.091 response: 00:10:16.091 { 00:10:16.091 "code": -32602, 00:10:16.091 "message": "Invalid cntlid range [1-0]" 00:10:16.091 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:16.091 16:21:25 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30135 -I 65520 00:10:16.350 [2024-04-26 16:21:25.173452] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30135: invalid cntlid range [1-65520] 00:10:16.350 16:21:25 -- target/invalid.sh@79 -- # out='request: 00:10:16.350 { 00:10:16.350 "nqn": "nqn.2016-06.io.spdk:cnode30135", 00:10:16.350 "max_cntlid": 65520, 00:10:16.350 "method": "nvmf_create_subsystem", 00:10:16.350 "req_id": 1 00:10:16.350 } 00:10:16.350 Got JSON-RPC error response 00:10:16.350 response: 
00:10:16.350 { 00:10:16.350 "code": -32602, 00:10:16.350 "message": "Invalid cntlid range [1-65520]" 00:10:16.350 }' 00:10:16.350 16:21:25 -- target/invalid.sh@80 -- # [[ request: 00:10:16.350 { 00:10:16.350 "nqn": "nqn.2016-06.io.spdk:cnode30135", 00:10:16.350 "max_cntlid": 65520, 00:10:16.350 "method": "nvmf_create_subsystem", 00:10:16.350 "req_id": 1 00:10:16.350 } 00:10:16.350 Got JSON-RPC error response 00:10:16.350 response: 00:10:16.350 { 00:10:16.350 "code": -32602, 00:10:16.350 "message": "Invalid cntlid range [1-65520]" 00:10:16.350 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:16.350 16:21:25 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1243 -i 6 -I 5 00:10:16.609 [2024-04-26 16:21:25.378214] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1243: invalid cntlid range [6-5] 00:10:16.609 16:21:25 -- target/invalid.sh@83 -- # out='request: 00:10:16.609 { 00:10:16.609 "nqn": "nqn.2016-06.io.spdk:cnode1243", 00:10:16.609 "min_cntlid": 6, 00:10:16.609 "max_cntlid": 5, 00:10:16.609 "method": "nvmf_create_subsystem", 00:10:16.609 "req_id": 1 00:10:16.609 } 00:10:16.609 Got JSON-RPC error response 00:10:16.609 response: 00:10:16.609 { 00:10:16.609 "code": -32602, 00:10:16.609 "message": "Invalid cntlid range [6-5]" 00:10:16.609 }' 00:10:16.609 16:21:25 -- target/invalid.sh@84 -- # [[ request: 00:10:16.609 { 00:10:16.609 "nqn": "nqn.2016-06.io.spdk:cnode1243", 00:10:16.609 "min_cntlid": 6, 00:10:16.610 "max_cntlid": 5, 00:10:16.610 "method": "nvmf_create_subsystem", 00:10:16.610 "req_id": 1 00:10:16.610 } 00:10:16.610 Got JSON-RPC error response 00:10:16.610 response: 00:10:16.610 { 00:10:16.610 "code": -32602, 00:10:16.610 "message": "Invalid cntlid range [6-5]" 00:10:16.610 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:16.610 16:21:25 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:10:16.610 16:21:25 -- target/invalid.sh@87 -- # out='request: 00:10:16.610 { 00:10:16.610 "name": "foobar", 00:10:16.610 "method": "nvmf_delete_target", 00:10:16.610 "req_id": 1 00:10:16.610 } 00:10:16.610 Got JSON-RPC error response 00:10:16.610 response: 00:10:16.610 { 00:10:16.610 "code": -32602, 00:10:16.610 "message": "The specified target doesn'\''t exist, cannot delete it." 00:10:16.610 }' 00:10:16.610 16:21:25 -- target/invalid.sh@88 -- # [[ request: 00:10:16.610 { 00:10:16.610 "name": "foobar", 00:10:16.610 "method": "nvmf_delete_target", 00:10:16.610 "req_id": 1 00:10:16.610 } 00:10:16.610 Got JSON-RPC error response 00:10:16.610 response: 00:10:16.610 { 00:10:16.610 "code": -32602, 00:10:16.610 "message": "The specified target doesn't exist, cannot delete it." 
00:10:16.610 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:10:16.610 16:21:25 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:10:16.610 16:21:25 -- target/invalid.sh@91 -- # nvmftestfini 00:10:16.610 16:21:25 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:16.610 16:21:25 -- nvmf/common.sh@117 -- # sync 00:10:16.610 16:21:25 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:16.610 16:21:25 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:16.610 16:21:25 -- nvmf/common.sh@120 -- # set +e 00:10:16.610 16:21:25 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:16.610 16:21:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:16.610 rmmod nvme_rdma 00:10:16.610 rmmod nvme_fabrics 00:10:16.610 16:21:25 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:16.610 16:21:25 -- nvmf/common.sh@124 -- # set -e 00:10:16.610 16:21:25 -- nvmf/common.sh@125 -- # return 0 00:10:16.610 16:21:25 -- nvmf/common.sh@478 -- # '[' -n 391181 ']' 00:10:16.610 16:21:25 -- nvmf/common.sh@479 -- # killprocess 391181 00:10:16.610 16:21:25 -- common/autotest_common.sh@936 -- # '[' -z 391181 ']' 00:10:16.610 16:21:25 -- common/autotest_common.sh@940 -- # kill -0 391181 00:10:16.610 16:21:25 -- common/autotest_common.sh@941 -- # uname 00:10:16.610 16:21:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:16.610 16:21:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 391181 00:10:16.610 16:21:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:16.610 16:21:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:16.610 16:21:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 391181' 00:10:16.610 killing process with pid 391181 00:10:16.610 16:21:25 -- common/autotest_common.sh@955 -- # kill 391181 00:10:16.610 16:21:25 -- common/autotest_common.sh@960 -- # wait 391181 00:10:16.870 [2024-04-26 16:21:25.704702] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:10:17.129 16:21:25 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:17.129 16:21:25 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:10:17.129 00:10:17.129 real 0m11.131s 00:10:17.129 user 0m21.070s 00:10:17.129 sys 0m6.258s 00:10:17.129 16:21:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:17.129 16:21:25 -- common/autotest_common.sh@10 -- # set +x 00:10:17.129 ************************************ 00:10:17.129 END TEST nvmf_invalid 00:10:17.129 ************************************ 00:10:17.129 16:21:25 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:10:17.129 16:21:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:17.129 16:21:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:17.129 16:21:25 -- common/autotest_common.sh@10 -- # set +x 00:10:17.129 ************************************ 00:10:17.129 START TEST nvmf_abort 00:10:17.129 ************************************ 00:10:17.129 16:21:26 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:10:17.388 * Looking for test storage... 
00:10:17.388 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:17.388 16:21:26 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:17.388 16:21:26 -- nvmf/common.sh@7 -- # uname -s 00:10:17.388 16:21:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:17.388 16:21:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:17.388 16:21:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:17.388 16:21:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:17.388 16:21:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:17.388 16:21:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:17.388 16:21:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:17.388 16:21:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:17.388 16:21:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:17.388 16:21:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:17.388 16:21:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:10:17.388 16:21:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:10:17.388 16:21:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:17.388 16:21:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:17.388 16:21:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:17.388 16:21:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:17.388 16:21:26 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:17.388 16:21:26 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:17.388 16:21:26 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:17.388 16:21:26 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:17.388 16:21:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.388 16:21:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.388 16:21:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.388 16:21:26 -- paths/export.sh@5 -- # export PATH 00:10:17.389 16:21:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.389 16:21:26 -- nvmf/common.sh@47 -- # : 0 00:10:17.389 16:21:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:17.389 16:21:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:17.389 16:21:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:17.389 16:21:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:17.389 16:21:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:17.389 16:21:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:17.389 16:21:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:17.389 16:21:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:17.389 16:21:26 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:17.389 16:21:26 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:10:17.389 16:21:26 -- target/abort.sh@14 -- # nvmftestinit 00:10:17.389 16:21:26 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:10:17.389 16:21:26 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:17.389 16:21:26 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:17.389 16:21:26 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:17.389 16:21:26 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:17.389 16:21:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.389 16:21:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:17.389 16:21:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.389 16:21:26 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:17.389 16:21:26 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:17.389 16:21:26 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:17.389 16:21:26 -- common/autotest_common.sh@10 -- # set +x 00:10:23.960 16:21:32 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:23.960 16:21:32 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:23.960 16:21:32 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:23.960 16:21:32 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:23.960 16:21:32 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:23.960 16:21:32 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:23.960 16:21:32 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:23.960 16:21:32 -- nvmf/common.sh@295 -- # net_devs=() 00:10:23.960 16:21:32 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:23.960 16:21:32 -- nvmf/common.sh@296 -- 
# e810=() 00:10:23.960 16:21:32 -- nvmf/common.sh@296 -- # local -ga e810 00:10:23.960 16:21:32 -- nvmf/common.sh@297 -- # x722=() 00:10:23.960 16:21:32 -- nvmf/common.sh@297 -- # local -ga x722 00:10:23.960 16:21:32 -- nvmf/common.sh@298 -- # mlx=() 00:10:23.960 16:21:32 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:23.960 16:21:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:23.960 16:21:32 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:23.960 16:21:32 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:23.960 16:21:32 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:23.960 16:21:32 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:23.960 16:21:32 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:23.960 16:21:32 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:23.960 16:21:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:23.960 16:21:32 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:23.960 16:21:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:23.960 16:21:32 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:23.960 16:21:32 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:23.960 16:21:32 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:23.960 16:21:32 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:23.960 16:21:32 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:23.960 16:21:32 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:23.960 16:21:32 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:23.960 16:21:32 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:23.960 16:21:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:23.960 16:21:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:10:23.960 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:10:23.960 16:21:32 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:23.960 16:21:32 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:23.960 16:21:32 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:10:23.960 16:21:32 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:10:23.960 16:21:32 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:23.960 16:21:32 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:23.960 16:21:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:23.960 16:21:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:10:23.960 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:10:23.960 16:21:32 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:23.960 16:21:32 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:23.960 16:21:32 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:10:23.960 16:21:32 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:10:23.960 16:21:32 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:23.960 16:21:32 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:23.960 16:21:32 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:23.960 16:21:32 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:23.960 16:21:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:23.960 16:21:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:23.960 16:21:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 
00:10:23.960 16:21:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:23.960 16:21:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:10:23.960 Found net devices under 0000:18:00.0: mlx_0_0 00:10:23.960 16:21:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:23.960 16:21:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:23.960 16:21:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:23.960 16:21:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:23.960 16:21:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:23.960 16:21:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:10:23.960 Found net devices under 0000:18:00.1: mlx_0_1 00:10:23.960 16:21:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:23.960 16:21:32 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:23.960 16:21:32 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:23.960 16:21:32 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:23.960 16:21:32 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:10:23.960 16:21:32 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:10:23.960 16:21:32 -- nvmf/common.sh@409 -- # rdma_device_init 00:10:23.960 16:21:32 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:10:23.960 16:21:32 -- nvmf/common.sh@58 -- # uname 00:10:23.960 16:21:32 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:23.960 16:21:32 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:23.960 16:21:32 -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:23.960 16:21:32 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:23.960 16:21:32 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:23.960 16:21:32 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:23.960 16:21:32 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:23.960 16:21:32 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:23.960 16:21:32 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:10:23.960 16:21:32 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:23.960 16:21:32 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:23.960 16:21:32 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:23.960 16:21:32 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:23.960 16:21:32 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:23.960 16:21:32 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:23.960 16:21:32 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:23.960 16:21:32 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:23.960 16:21:32 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:23.960 16:21:32 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:23.960 16:21:32 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:23.960 16:21:32 -- nvmf/common.sh@105 -- # continue 2 00:10:23.960 16:21:32 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:23.960 16:21:32 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:23.960 16:21:32 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:23.961 16:21:32 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:23.961 16:21:32 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:23.961 16:21:32 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:23.961 16:21:32 -- nvmf/common.sh@105 -- # continue 2 00:10:23.961 16:21:32 -- nvmf/common.sh@73 -- # for nic_name in 
$(get_rdma_if_list) 00:10:23.961 16:21:32 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:23.961 16:21:32 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:23.961 16:21:32 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:23.961 16:21:32 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:23.961 16:21:32 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:23.961 16:21:32 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:23.961 16:21:32 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:23.961 16:21:32 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:23.961 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:23.961 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:10:23.961 altname enp24s0f0np0 00:10:23.961 altname ens785f0np0 00:10:23.961 inet 192.168.100.8/24 scope global mlx_0_0 00:10:23.961 valid_lft forever preferred_lft forever 00:10:23.961 16:21:32 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:23.961 16:21:32 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:23.961 16:21:32 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:23.961 16:21:32 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:23.961 16:21:32 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:23.961 16:21:32 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:23.961 16:21:32 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:23.961 16:21:32 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:23.961 16:21:32 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:23.961 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:23.961 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:10:23.961 altname enp24s0f1np1 00:10:23.961 altname ens785f1np1 00:10:23.961 inet 192.168.100.9/24 scope global mlx_0_1 00:10:23.961 valid_lft forever preferred_lft forever 00:10:23.961 16:21:32 -- nvmf/common.sh@411 -- # return 0 00:10:23.961 16:21:32 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:23.961 16:21:32 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:23.961 16:21:32 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:10:23.961 16:21:32 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:10:23.961 16:21:32 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:23.961 16:21:32 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:23.961 16:21:32 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:23.961 16:21:32 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:23.961 16:21:32 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:23.961 16:21:32 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:23.961 16:21:32 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:23.961 16:21:32 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:23.961 16:21:32 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:23.961 16:21:32 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:23.961 16:21:32 -- nvmf/common.sh@105 -- # continue 2 00:10:23.961 16:21:32 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:23.961 16:21:32 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:23.961 16:21:32 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:23.961 16:21:32 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:23.961 16:21:32 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:23.961 16:21:32 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:23.961 16:21:32 -- 
nvmf/common.sh@105 -- # continue 2 00:10:23.961 16:21:32 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:23.961 16:21:32 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:23.961 16:21:32 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:23.961 16:21:32 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:23.961 16:21:32 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:23.961 16:21:32 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:23.961 16:21:32 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:23.961 16:21:32 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:23.961 16:21:32 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:23.961 16:21:32 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:23.961 16:21:32 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:23.961 16:21:32 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:23.961 16:21:32 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:10:23.961 192.168.100.9' 00:10:23.961 16:21:32 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:10:23.961 192.168.100.9' 00:10:23.961 16:21:32 -- nvmf/common.sh@446 -- # head -n 1 00:10:23.961 16:21:32 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:23.961 16:21:32 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:10:23.961 192.168.100.9' 00:10:23.961 16:21:32 -- nvmf/common.sh@447 -- # head -n 1 00:10:23.961 16:21:32 -- nvmf/common.sh@447 -- # tail -n +2 00:10:23.961 16:21:32 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:23.961 16:21:32 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:10:23.961 16:21:32 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:23.961 16:21:32 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:10:23.961 16:21:32 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:10:23.961 16:21:32 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:10:24.221 16:21:32 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:10:24.221 16:21:32 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:24.221 16:21:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:24.221 16:21:32 -- common/autotest_common.sh@10 -- # set +x 00:10:24.221 16:21:33 -- nvmf/common.sh@470 -- # nvmfpid=394844 00:10:24.221 16:21:33 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:24.221 16:21:33 -- nvmf/common.sh@471 -- # waitforlisten 394844 00:10:24.221 16:21:33 -- common/autotest_common.sh@817 -- # '[' -z 394844 ']' 00:10:24.221 16:21:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.221 16:21:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:24.221 16:21:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.221 16:21:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:24.221 16:21:33 -- common/autotest_common.sh@10 -- # set +x 00:10:24.221 [2024-04-26 16:21:33.052699] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
00:10:24.221 [2024-04-26 16:21:33.052757] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:24.221 EAL: No free 2048 kB hugepages reported on node 1 00:10:24.221 [2024-04-26 16:21:33.126086] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:24.221 [2024-04-26 16:21:33.212137] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:24.221 [2024-04-26 16:21:33.212185] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:24.221 [2024-04-26 16:21:33.212194] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:24.221 [2024-04-26 16:21:33.212218] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:24.221 [2024-04-26 16:21:33.212226] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:24.221 [2024-04-26 16:21:33.212342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:24.221 [2024-04-26 16:21:33.212422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:24.221 [2024-04-26 16:21:33.212424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:25.157 16:21:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:25.158 16:21:33 -- common/autotest_common.sh@850 -- # return 0 00:10:25.158 16:21:33 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:25.158 16:21:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:25.158 16:21:33 -- common/autotest_common.sh@10 -- # set +x 00:10:25.158 16:21:33 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:25.158 16:21:33 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:10:25.158 16:21:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:25.158 16:21:33 -- common/autotest_common.sh@10 -- # set +x 00:10:25.158 [2024-04-26 16:21:33.943601] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1a7eb30/0x1a83020) succeed. 00:10:25.158 [2024-04-26 16:21:33.953813] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1a800d0/0x1ac46b0) succeed. 
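[editor's note] The target-side setup traced in the lines that follow condenses to roughly the RPC sequence below. This is a readability sketch only, assuming the rpc.py path from the checked-out SPDK tree and the 192.168.100.8 listener address detected earlier in this run; the test itself issues the same calls through its rpc_cmd wrapper.

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    # transport was created above; the remaining setup exposes a delayed malloc namespace
    $rpc bdev_malloc_create 64 4096 -b Malloc0
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    # the abort example is then pointed at that listener
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128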
00:10:25.158 16:21:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:25.158 16:21:34 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:10:25.158 16:21:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:25.158 16:21:34 -- common/autotest_common.sh@10 -- # set +x 00:10:25.158 Malloc0 00:10:25.158 16:21:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:25.158 16:21:34 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:25.158 16:21:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:25.158 16:21:34 -- common/autotest_common.sh@10 -- # set +x 00:10:25.158 Delay0 00:10:25.158 16:21:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:25.158 16:21:34 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:25.158 16:21:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:25.158 16:21:34 -- common/autotest_common.sh@10 -- # set +x 00:10:25.158 16:21:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:25.158 16:21:34 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:10:25.158 16:21:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:25.158 16:21:34 -- common/autotest_common.sh@10 -- # set +x 00:10:25.158 16:21:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:25.158 16:21:34 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:10:25.158 16:21:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:25.158 16:21:34 -- common/autotest_common.sh@10 -- # set +x 00:10:25.158 [2024-04-26 16:21:34.116356] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:25.158 16:21:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:25.158 16:21:34 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:25.158 16:21:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:25.158 16:21:34 -- common/autotest_common.sh@10 -- # set +x 00:10:25.158 16:21:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:25.158 16:21:34 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:10:25.158 EAL: No free 2048 kB hugepages reported on node 1 00:10:25.417 [2024-04-26 16:21:34.210386] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:27.319 Initializing NVMe Controllers 00:10:27.319 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:10:27.319 controller IO queue size 128 less than required 00:10:27.319 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:10:27.319 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:10:27.319 Initialization complete. Launching workers. 
00:10:27.319 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 50620 00:10:27.319 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 50681, failed to submit 62 00:10:27.319 success 50621, unsuccess 60, failed 0 00:10:27.319 16:21:36 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:27.319 16:21:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.319 16:21:36 -- common/autotest_common.sh@10 -- # set +x 00:10:27.319 16:21:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:27.319 16:21:36 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:10:27.319 16:21:36 -- target/abort.sh@38 -- # nvmftestfini 00:10:27.319 16:21:36 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:27.319 16:21:36 -- nvmf/common.sh@117 -- # sync 00:10:27.319 16:21:36 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:27.319 16:21:36 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:27.319 16:21:36 -- nvmf/common.sh@120 -- # set +e 00:10:27.319 16:21:36 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:27.319 16:21:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:27.319 rmmod nvme_rdma 00:10:27.578 rmmod nvme_fabrics 00:10:27.578 16:21:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:27.578 16:21:36 -- nvmf/common.sh@124 -- # set -e 00:10:27.578 16:21:36 -- nvmf/common.sh@125 -- # return 0 00:10:27.578 16:21:36 -- nvmf/common.sh@478 -- # '[' -n 394844 ']' 00:10:27.578 16:21:36 -- nvmf/common.sh@479 -- # killprocess 394844 00:10:27.578 16:21:36 -- common/autotest_common.sh@936 -- # '[' -z 394844 ']' 00:10:27.578 16:21:36 -- common/autotest_common.sh@940 -- # kill -0 394844 00:10:27.578 16:21:36 -- common/autotest_common.sh@941 -- # uname 00:10:27.578 16:21:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:27.578 16:21:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 394844 00:10:27.578 16:21:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:27.578 16:21:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:27.578 16:21:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 394844' 00:10:27.578 killing process with pid 394844 00:10:27.578 16:21:36 -- common/autotest_common.sh@955 -- # kill 394844 00:10:27.578 16:21:36 -- common/autotest_common.sh@960 -- # wait 394844 00:10:27.578 [2024-04-26 16:21:36.501477] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:10:27.836 16:21:36 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:27.836 16:21:36 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:10:27.836 00:10:27.836 real 0m10.619s 00:10:27.836 user 0m14.538s 00:10:27.836 sys 0m5.650s 00:10:27.836 16:21:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:27.836 16:21:36 -- common/autotest_common.sh@10 -- # set +x 00:10:27.836 ************************************ 00:10:27.836 END TEST nvmf_abort 00:10:27.836 ************************************ 00:10:27.836 16:21:36 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:10:27.836 16:21:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:27.836 16:21:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:27.836 16:21:36 -- common/autotest_common.sh@10 -- # set +x 00:10:28.095 ************************************ 00:10:28.095 START TEST 
nvmf_ns_hotplug_stress 00:10:28.095 ************************************ 00:10:28.095 16:21:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:10:28.095 * Looking for test storage... 00:10:28.095 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:28.095 16:21:37 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:28.095 16:21:37 -- nvmf/common.sh@7 -- # uname -s 00:10:28.095 16:21:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:28.095 16:21:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:28.095 16:21:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:28.095 16:21:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:28.095 16:21:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:28.095 16:21:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:28.095 16:21:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:28.095 16:21:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:28.095 16:21:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:28.095 16:21:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:28.095 16:21:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:10:28.095 16:21:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:10:28.095 16:21:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:28.095 16:21:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:28.095 16:21:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:28.095 16:21:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:28.095 16:21:37 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:28.095 16:21:37 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:28.095 16:21:37 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:28.095 16:21:37 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:28.095 16:21:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.095 16:21:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.095 16:21:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.095 16:21:37 -- paths/export.sh@5 -- # export PATH 00:10:28.095 16:21:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.095 16:21:37 -- nvmf/common.sh@47 -- # : 0 00:10:28.095 16:21:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:28.095 16:21:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:28.095 16:21:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:28.095 16:21:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:28.095 16:21:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:28.095 16:21:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:28.095 16:21:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:28.095 16:21:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:28.095 16:21:37 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:28.095 16:21:37 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:10:28.095 16:21:37 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:10:28.095 16:21:37 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:28.095 16:21:37 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:28.095 16:21:37 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:28.095 16:21:37 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:28.095 16:21:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.095 16:21:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:28.095 16:21:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:28.095 16:21:37 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:28.095 16:21:37 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:28.095 16:21:37 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:28.095 16:21:37 -- common/autotest_common.sh@10 -- # set +x 00:10:34.661 16:21:42 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:34.661 16:21:42 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:34.661 16:21:42 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:34.661 16:21:42 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:34.661 16:21:42 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:34.661 16:21:42 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:34.661 16:21:42 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:34.661 16:21:42 -- nvmf/common.sh@295 -- # net_devs=() 00:10:34.661 16:21:42 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:34.661 16:21:42 -- nvmf/common.sh@296 -- 
# e810=() 00:10:34.661 16:21:42 -- nvmf/common.sh@296 -- # local -ga e810 00:10:34.661 16:21:42 -- nvmf/common.sh@297 -- # x722=() 00:10:34.661 16:21:42 -- nvmf/common.sh@297 -- # local -ga x722 00:10:34.661 16:21:42 -- nvmf/common.sh@298 -- # mlx=() 00:10:34.661 16:21:42 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:34.661 16:21:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:34.661 16:21:42 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:34.661 16:21:42 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:34.661 16:21:42 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:34.661 16:21:42 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:34.661 16:21:42 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:34.661 16:21:42 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:34.661 16:21:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:34.661 16:21:42 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:34.661 16:21:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:34.661 16:21:42 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:34.661 16:21:42 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:34.661 16:21:42 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:34.661 16:21:42 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:34.661 16:21:42 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:34.661 16:21:42 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:34.661 16:21:42 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:34.661 16:21:42 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:34.661 16:21:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:34.661 16:21:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:10:34.661 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:10:34.661 16:21:42 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:34.661 16:21:42 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:34.661 16:21:42 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:10:34.661 16:21:42 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:10:34.661 16:21:42 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:34.661 16:21:42 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:34.661 16:21:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:34.661 16:21:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:10:34.661 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:10:34.661 16:21:42 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:34.661 16:21:42 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:34.661 16:21:42 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:10:34.661 16:21:42 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:10:34.661 16:21:42 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:34.661 16:21:42 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:34.661 16:21:42 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:34.661 16:21:42 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:34.661 16:21:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:34.661 16:21:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:34.661 16:21:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 
00:10:34.661 16:21:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:34.661 16:21:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:10:34.661 Found net devices under 0000:18:00.0: mlx_0_0 00:10:34.661 16:21:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:34.661 16:21:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:34.661 16:21:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:34.661 16:21:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:34.661 16:21:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:34.661 16:21:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:10:34.661 Found net devices under 0000:18:00.1: mlx_0_1 00:10:34.661 16:21:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:34.661 16:21:42 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:34.661 16:21:42 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:34.661 16:21:42 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:34.661 16:21:42 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:10:34.661 16:21:42 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:10:34.661 16:21:42 -- nvmf/common.sh@409 -- # rdma_device_init 00:10:34.661 16:21:42 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:10:34.661 16:21:42 -- nvmf/common.sh@58 -- # uname 00:10:34.661 16:21:42 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:34.661 16:21:42 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:34.661 16:21:42 -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:34.661 16:21:42 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:34.661 16:21:42 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:34.661 16:21:42 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:34.661 16:21:42 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:34.661 16:21:42 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:34.661 16:21:42 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:10:34.661 16:21:42 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:34.661 16:21:42 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:34.661 16:21:42 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:34.661 16:21:42 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:34.661 16:21:42 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:34.661 16:21:42 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:34.661 16:21:42 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:34.661 16:21:42 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:34.661 16:21:42 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:34.661 16:21:42 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:34.661 16:21:42 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:34.661 16:21:42 -- nvmf/common.sh@105 -- # continue 2 00:10:34.661 16:21:42 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:34.661 16:21:42 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:34.661 16:21:42 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:34.661 16:21:42 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:34.661 16:21:42 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:34.661 16:21:42 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:34.661 16:21:42 -- nvmf/common.sh@105 -- # continue 2 00:10:34.661 16:21:42 -- nvmf/common.sh@73 -- # for nic_name in 
$(get_rdma_if_list) 00:10:34.661 16:21:42 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:34.661 16:21:42 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:34.661 16:21:42 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:34.661 16:21:42 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:34.661 16:21:42 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:34.661 16:21:42 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:34.661 16:21:42 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:34.661 16:21:42 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:34.661 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:34.661 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:10:34.661 altname enp24s0f0np0 00:10:34.661 altname ens785f0np0 00:10:34.661 inet 192.168.100.8/24 scope global mlx_0_0 00:10:34.661 valid_lft forever preferred_lft forever 00:10:34.661 16:21:42 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:34.661 16:21:42 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:34.661 16:21:42 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:34.661 16:21:42 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:34.661 16:21:42 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:34.661 16:21:42 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:34.661 16:21:42 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:34.661 16:21:42 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:34.661 16:21:42 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:34.661 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:34.661 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:10:34.661 altname enp24s0f1np1 00:10:34.661 altname ens785f1np1 00:10:34.661 inet 192.168.100.9/24 scope global mlx_0_1 00:10:34.661 valid_lft forever preferred_lft forever 00:10:34.661 16:21:42 -- nvmf/common.sh@411 -- # return 0 00:10:34.661 16:21:42 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:34.661 16:21:42 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:34.661 16:21:42 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:10:34.661 16:21:42 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:10:34.661 16:21:42 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:34.661 16:21:42 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:34.661 16:21:42 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:34.661 16:21:42 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:34.661 16:21:42 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:34.661 16:21:42 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:34.661 16:21:42 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:34.661 16:21:42 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:34.661 16:21:42 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:34.661 16:21:42 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:34.661 16:21:42 -- nvmf/common.sh@105 -- # continue 2 00:10:34.661 16:21:42 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:34.661 16:21:42 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:34.661 16:21:42 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:34.661 16:21:42 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:34.661 16:21:42 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:34.661 16:21:42 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:34.661 16:21:42 -- 
nvmf/common.sh@105 -- # continue 2 00:10:34.661 16:21:42 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:34.661 16:21:42 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:34.661 16:21:42 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:34.661 16:21:42 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:34.661 16:21:42 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:34.661 16:21:42 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:34.661 16:21:42 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:34.661 16:21:42 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:34.661 16:21:42 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:34.661 16:21:42 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:34.661 16:21:42 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:34.661 16:21:42 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:34.661 16:21:42 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:10:34.661 192.168.100.9' 00:10:34.661 16:21:42 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:10:34.661 192.168.100.9' 00:10:34.661 16:21:42 -- nvmf/common.sh@446 -- # head -n 1 00:10:34.661 16:21:42 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:34.661 16:21:42 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:10:34.661 192.168.100.9' 00:10:34.661 16:21:42 -- nvmf/common.sh@447 -- # tail -n +2 00:10:34.661 16:21:42 -- nvmf/common.sh@447 -- # head -n 1 00:10:34.661 16:21:42 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:34.661 16:21:42 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:10:34.661 16:21:42 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:34.661 16:21:42 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:10:34.661 16:21:42 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:10:34.661 16:21:42 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:10:34.661 16:21:42 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:10:34.661 16:21:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:34.661 16:21:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:34.661 16:21:42 -- common/autotest_common.sh@10 -- # set +x 00:10:34.661 16:21:42 -- nvmf/common.sh@470 -- # nvmfpid=398207 00:10:34.661 16:21:42 -- nvmf/common.sh@471 -- # waitforlisten 398207 00:10:34.661 16:21:42 -- common/autotest_common.sh@817 -- # '[' -z 398207 ']' 00:10:34.661 16:21:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.661 16:21:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:34.661 16:21:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.661 16:21:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:34.661 16:21:42 -- common/autotest_common.sh@10 -- # set +x 00:10:34.661 16:21:42 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:34.661 [2024-04-26 16:21:42.897466] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
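allocate_nic_ips and get_available_rdma_ips above reduce to one pipeline per RDMA interface; a minimal sketch of that pipeline, using the interface names and addresses reported in this run:

# Sketch of the get_ip_address helper traced at nvmf/common.sh@112-113:
# first IPv4 address of an interface, without the prefix length.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

# In this run mlx_0_0/mlx_0_1 carry 192.168.100.8 and 192.168.100.9, which end up
# in RDMA_IP_LIST and then NVMF_FIRST_TARGET_IP / NVMF_SECOND_TARGET_IP via head/tail.
for nic in mlx_0_0 mlx_0_1; do
    get_ip_address "$nic"
done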
00:10:34.661 [2024-04-26 16:21:42.897523] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:34.661 EAL: No free 2048 kB hugepages reported on node 1 00:10:34.661 [2024-04-26 16:21:42.972998] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:34.661 [2024-04-26 16:21:43.048951] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:34.661 [2024-04-26 16:21:43.048999] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:34.661 [2024-04-26 16:21:43.049008] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:34.661 [2024-04-26 16:21:43.049033] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:34.661 [2024-04-26 16:21:43.049040] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:34.661 [2024-04-26 16:21:43.049141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:34.661 [2024-04-26 16:21:43.049226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:34.661 [2024-04-26 16:21:43.049228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:34.920 16:21:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:34.920 16:21:43 -- common/autotest_common.sh@850 -- # return 0 00:10:34.920 16:21:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:34.920 16:21:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:34.920 16:21:43 -- common/autotest_common.sh@10 -- # set +x 00:10:34.920 16:21:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:34.920 16:21:43 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:10:34.920 16:21:43 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:34.920 [2024-04-26 16:21:43.936086] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6dab30/0x6df020) succeed. 00:10:35.179 [2024-04-26 16:21:43.946384] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6dc0d0/0x7206b0) succeed. 
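The rpc.py calls that follow provision the target the stress test runs against; condensed into one place, assuming a running nvmf_tgt and the default /var/tmp/spdk.sock RPC socket (RPC below is shorthand for the full scripts/rpc.py path used in this job):

RPC="./scripts/rpc.py"   # shorthand for the full path traced in this log

$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

# Backing bdevs: a 32 MB malloc disk wrapped in a delay bdev, plus a resizable
# null bdev that the hotplug loop keeps growing.
$RPC bdev_malloc_create 32 512 -b Malloc0
$RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC bdev_null_create NULL1 1000 512

$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1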
00:10:35.179 16:21:44 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:35.438 16:21:44 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:35.438 [2024-04-26 16:21:44.426502] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:35.439 16:21:44 -- target/ns_hotplug_stress.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:35.697 16:21:44 -- target/ns_hotplug_stress.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:10:35.956 Malloc0 00:10:35.956 16:21:44 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:36.215 Delay0 00:10:36.215 16:21:45 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:36.215 16:21:45 -- target/ns_hotplug_stress.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:36.474 NULL1 00:10:36.474 16:21:45 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:36.733 16:21:45 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=398589 00:10:36.733 16:21:45 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:36.733 16:21:45 -- target/ns_hotplug_stress.sh@35 -- # kill -0 398589 00:10:36.733 16:21:45 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.733 EAL: No free 2048 kB hugepages reported on node 1 00:10:38.111 Read completed with error (sct=0, sc=11) 00:10:38.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.111 16:21:46 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:38.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.111 16:21:46 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:10:38.111 16:21:46 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:38.111 true 00:10:38.111 16:21:47 -- target/ns_hotplug_stress.sh@35 -- # kill -0 398589 00:10:38.111 16:21:47 -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:39.047 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:39.047 16:21:47 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:39.047 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:39.047 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:39.047 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:39.047 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:39.047 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:39.047 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:39.307 16:21:48 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:10:39.307 16:21:48 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:39.307 true 00:10:39.307 16:21:48 -- target/ns_hotplug_stress.sh@35 -- # kill -0 398589 00:10:39.307 16:21:48 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:40.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.245 16:21:49 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:40.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.245 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.505 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.505 16:21:49 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:10:40.505 16:21:49 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:40.505 true 00:10:40.505 16:21:49 -- target/ns_hotplug_stress.sh@35 -- # kill -0 398589 00:10:40.505 16:21:49 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.448 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:41.448 16:21:50 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:41.448 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:41.448 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:41.448 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:41.448 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:41.448 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:41.448 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:41.707 16:21:50 -- target/ns_hotplug_stress.sh@40 -- # 
null_size=1004 00:10:41.707 16:21:50 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:41.707 true 00:10:41.707 16:21:50 -- target/ns_hotplug_stress.sh@35 -- # kill -0 398589 00:10:41.707 16:21:50 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:42.644 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:42.644 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:42.644 16:21:51 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:42.644 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:42.644 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:42.644 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:42.644 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:42.644 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:42.902 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:42.902 16:21:51 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:10:42.903 16:21:51 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:42.903 true 00:10:42.903 16:21:51 -- target/ns_hotplug_stress.sh@35 -- # kill -0 398589 00:10:42.903 16:21:51 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.839 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:43.839 16:21:52 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:43.839 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:43.839 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:44.098 16:21:52 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:10:44.098 16:21:52 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:44.098 true 00:10:44.098 16:21:53 -- target/ns_hotplug_stress.sh@35 -- # kill -0 398589 00:10:44.098 16:21:53 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.033 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:45.033 16:21:53 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:45.033 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:45.033 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:45.033 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:45.033 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:45.033 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:45.292 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:45.292 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:10:45.292 16:21:54 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:10:45.292 16:21:54 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:45.292 true 00:10:45.550 16:21:54 -- target/ns_hotplug_stress.sh@35 -- # kill -0 398589 00:10:45.550 16:21:54 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.116 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:46.376 16:21:55 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:46.376 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:46.376 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:46.376 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:46.376 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:46.376 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:46.376 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:46.376 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:46.376 16:21:55 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:10:46.376 16:21:55 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:46.635 true 00:10:46.635 16:21:55 -- target/ns_hotplug_stress.sh@35 -- # kill -0 398589 00:10:46.635 16:21:55 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.572 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:47.572 16:21:56 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:47.572 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:47.572 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:47.572 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:47.572 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:47.572 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:47.572 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:47.572 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:47.572 16:21:56 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:10:47.572 16:21:56 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:47.831 true 00:10:47.831 16:21:56 -- target/ns_hotplug_stress.sh@35 -- # kill -0 398589 00:10:47.831 16:21:56 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:48.766 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:48.766 16:21:57 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:48.766 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:48.766 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:48.766 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:48.766 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:48.766 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:48.766 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:48.766 16:21:57 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:10:48.766 16:21:57 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:49.024 true 00:10:49.024 16:21:57 -- target/ns_hotplug_stress.sh@35 -- # kill -0 398589 00:10:49.024 16:21:57 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:49.960 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:49.960 16:21:58 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:49.960 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:49.960 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:49.960 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:49.960 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:49.960 16:21:58 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:10:49.960 16:21:58 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:50.219 true 00:10:50.219 16:21:59 -- target/ns_hotplug_stress.sh@35 -- # kill -0 398589 00:10:50.219 16:21:59 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:51.155 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:51.155 16:22:00 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:51.155 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:51.155 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:51.155 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:51.155 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:51.155 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:51.155 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:51.155 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:51.413 16:22:00 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:10:51.413 16:22:00 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:51.413 true 00:10:51.413 16:22:00 -- target/ns_hotplug_stress.sh@35 -- # kill -0 398589 00:10:51.413 16:22:00 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:52.348 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:52.348 16:22:01 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:52.348 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:52.348 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:52.348 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:52.348 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:52.348 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:52.348 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:52.606 16:22:01 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:10:52.606 16:22:01 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:52.606 true 00:10:52.606 16:22:01 -- target/ns_hotplug_stress.sh@35 -- # kill -0 398589 00:10:52.606 16:22:01 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:53.543 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:53.543 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:53.544 16:22:02 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:53.544 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:53.544 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:53.544 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:53.544 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:53.544 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:53.803 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:53.803 16:22:02 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:10:53.803 16:22:02 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:53.803 true 00:10:53.803 16:22:02 -- target/ns_hotplug_stress.sh@35 -- # kill -0 398589 00:10:53.803 16:22:02 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:54.738 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:54.738 16:22:03 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:54.738 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:54.738 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:54.738 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:54.738 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:54.738 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:54.738 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:54.996 16:22:03 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:10:54.996 16:22:03 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:54.996 true 00:10:54.996 16:22:03 -- target/ns_hotplug_stress.sh@35 -- # kill -0 398589 00:10:54.996 16:22:03 -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:55.932 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:55.932 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:55.932 16:22:04 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:55.932 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:55.932 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:55.932 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:55.932 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:56.191 16:22:05 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:10:56.191 16:22:05 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:56.191 true 00:10:56.191 16:22:05 -- target/ns_hotplug_stress.sh@35 -- # kill -0 398589 00:10:56.191 16:22:05 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:57.127 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:57.127 16:22:06 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:57.127 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:57.127 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:57.127 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:57.386 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:57.386 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:57.386 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:57.386 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:57.386 16:22:06 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:10:57.386 16:22:06 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:57.645 true 00:10:57.645 16:22:06 -- target/ns_hotplug_stress.sh@35 -- # kill -0 398589 00:10:57.645 16:22:06 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:58.582 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:58.582 16:22:07 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:58.582 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:58.582 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:58.582 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:58.582 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:58.582 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:58.582 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:58.582 16:22:07 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:10:58.582 16:22:07 -- target/ns_hotplug_stress.sh@41 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:58.841 true 00:10:58.841 16:22:07 -- target/ns_hotplug_stress.sh@35 -- # kill -0 398589 00:10:58.841 16:22:07 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:59.776 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:59.776 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:59.776 16:22:08 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:59.776 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:59.776 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:59.776 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:59.776 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:59.776 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:59.776 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:59.776 16:22:08 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:10:59.776 16:22:08 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:11:00.035 true 00:11:00.035 16:22:08 -- target/ns_hotplug_stress.sh@35 -- # kill -0 398589 00:11:00.035 16:22:08 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:00.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:00.970 16:22:09 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:00.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:00.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:00.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:00.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:00.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:00.970 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:00.970 16:22:09 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:11:00.970 16:22:09 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:11:01.241 true 00:11:01.241 16:22:10 -- target/ns_hotplug_stress.sh@35 -- # kill -0 398589 00:11:01.241 16:22:10 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:02.185 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:02.185 16:22:10 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:02.185 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:02.185 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:02.185 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:02.185 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:11:02.185 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:02.185 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:02.185 16:22:11 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:11:02.185 16:22:11 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:11:02.443 true 00:11:02.443 16:22:11 -- target/ns_hotplug_stress.sh@35 -- # kill -0 398589 00:11:02.443 16:22:11 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:03.378 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:03.379 16:22:12 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:03.379 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:03.379 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:03.379 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:03.379 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:03.379 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:03.379 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:03.379 16:22:12 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:11:03.379 16:22:12 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:11:03.638 true 00:11:03.638 16:22:12 -- target/ns_hotplug_stress.sh@35 -- # kill -0 398589 00:11:03.638 16:22:12 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:04.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:04.575 16:22:13 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:04.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:04.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:04.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:04.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:04.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:04.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:04.575 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:04.575 16:22:13 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:11:04.575 16:22:13 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:11:04.834 true 00:11:04.834 16:22:13 -- target/ns_hotplug_stress.sh@35 -- # kill -0 398589 00:11:04.834 16:22:13 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:05.770 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:05.770 16:22:14 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
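Every iteration above follows the same pattern: check that spdk_nvme_perf (PERF_PID) is still alive, hot-remove namespace 1 while I/O is in flight (hence the suppressed "Read completed with error (sct=0, sc=11)" floods), re-add Delay0, and grow NULL1 by one size step. A condensed sketch of that loop, not the literal script, assuming RPC and PERF_PID are set as earlier in this log:

null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do
    # Hot-remove the namespace under load, then plug Delay0 back in.
    $RPC nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # Resize the null bdev one step per pass (1001, 1002, ... as in the trace).
    null_size=$((null_size + 1))
    $RPC bdev_null_resize NULL1 "$null_size"
done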
00:11:05.770 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:05.770 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:05.770 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:05.770 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:05.770 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:05.770 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:05.770 16:22:14 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:11:05.770 16:22:14 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:11:06.029 true 00:11:06.029 16:22:14 -- target/ns_hotplug_stress.sh@35 -- # kill -0 398589 00:11:06.029 16:22:14 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:06.967 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:06.967 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:06.967 16:22:15 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:06.967 16:22:15 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:11:06.967 16:22:15 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:11:07.226 true 00:11:07.226 16:22:16 -- target/ns_hotplug_stress.sh@35 -- # kill -0 398589 00:11:07.226 16:22:16 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.484 16:22:16 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:07.743 16:22:16 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:11:07.743 16:22:16 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:11:07.743 true 00:11:07.743 16:22:16 -- target/ns_hotplug_stress.sh@35 -- # kill -0 398589 00:11:07.743 16:22:16 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:08.002 16:22:16 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:08.261 16:22:17 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:11:08.261 16:22:17 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:11:08.261 true 00:11:08.261 16:22:17 -- target/ns_hotplug_stress.sh@35 -- # kill -0 398589 00:11:08.261 16:22:17 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:08.519 16:22:17 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:08.778 16:22:17 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:11:08.778 16:22:17 -- target/ns_hotplug_stress.sh@41 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:11:09.037 Initializing NVMe Controllers 00:11:09.037 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:11:09.037 Controller IO queue size 128, less than required. 00:11:09.037 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:09.037 Controller IO queue size 128, less than required. 00:11:09.037 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:09.037 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:09.037 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:11:09.037 Initialization complete. Launching workers. 00:11:09.037 ======================================================== 00:11:09.037 Latency(us) 00:11:09.037 Device Information : IOPS MiB/s Average min max 00:11:09.037 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5857.77 2.86 19112.34 874.11 1138861.05 00:11:09.037 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 33264.40 16.24 3847.95 2318.96 293511.39 00:11:09.037 ======================================================== 00:11:09.037 Total : 39122.17 19.10 6133.49 874.11 1138861.05 00:11:09.037 00:11:09.037 true 00:11:09.037 16:22:17 -- target/ns_hotplug_stress.sh@35 -- # kill -0 398589 00:11:09.037 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (398589) - No such process 00:11:09.037 16:22:17 -- target/ns_hotplug_stress.sh@44 -- # wait 398589 00:11:09.037 16:22:17 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:11:09.037 16:22:17 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:11:09.037 16:22:17 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:09.037 16:22:17 -- nvmf/common.sh@117 -- # sync 00:11:09.037 16:22:17 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:09.037 16:22:17 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:09.037 16:22:17 -- nvmf/common.sh@120 -- # set +e 00:11:09.037 16:22:17 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:09.037 16:22:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:09.037 rmmod nvme_rdma 00:11:09.037 rmmod nvme_fabrics 00:11:09.037 16:22:17 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:09.037 16:22:17 -- nvmf/common.sh@124 -- # set -e 00:11:09.037 16:22:17 -- nvmf/common.sh@125 -- # return 0 00:11:09.037 16:22:17 -- nvmf/common.sh@478 -- # '[' -n 398207 ']' 00:11:09.037 16:22:17 -- nvmf/common.sh@479 -- # killprocess 398207 00:11:09.037 16:22:17 -- common/autotest_common.sh@936 -- # '[' -z 398207 ']' 00:11:09.037 16:22:17 -- common/autotest_common.sh@940 -- # kill -0 398207 00:11:09.037 16:22:17 -- common/autotest_common.sh@941 -- # uname 00:11:09.037 16:22:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:09.037 16:22:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 398207 00:11:09.037 16:22:17 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:09.037 16:22:17 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:09.038 16:22:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 398207' 00:11:09.038 killing process with pid 398207 00:11:09.038 16:22:17 -- common/autotest_common.sh@955 -- # kill 398207 00:11:09.038 16:22:17 -- 
common/autotest_common.sh@960 -- # wait 398207 00:11:09.038 [2024-04-26 16:22:18.005331] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:11:09.296 16:22:18 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:09.296 16:22:18 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:11:09.296 00:11:09.296 real 0m41.317s 00:11:09.296 user 2m32.704s 00:11:09.296 sys 0m8.161s 00:11:09.296 16:22:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:09.296 16:22:18 -- common/autotest_common.sh@10 -- # set +x 00:11:09.296 ************************************ 00:11:09.296 END TEST nvmf_ns_hotplug_stress 00:11:09.296 ************************************ 00:11:09.296 16:22:18 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:11:09.296 16:22:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:09.296 16:22:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:09.296 16:22:18 -- common/autotest_common.sh@10 -- # set +x 00:11:09.555 ************************************ 00:11:09.555 START TEST nvmf_connect_stress 00:11:09.555 ************************************ 00:11:09.555 16:22:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:11:09.555 * Looking for test storage... 00:11:09.555 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:09.555 16:22:18 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:09.555 16:22:18 -- nvmf/common.sh@7 -- # uname -s 00:11:09.555 16:22:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:09.555 16:22:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:09.555 16:22:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:09.555 16:22:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:09.556 16:22:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:09.556 16:22:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:09.556 16:22:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:09.556 16:22:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:09.556 16:22:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:09.556 16:22:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:09.556 16:22:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:11:09.556 16:22:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:11:09.556 16:22:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:09.556 16:22:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:09.556 16:22:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:09.556 16:22:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:09.556 16:22:18 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:09.556 16:22:18 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.556 16:22:18 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.556 16:22:18 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.556 16:22:18 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.556 16:22:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.556 16:22:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.556 16:22:18 -- paths/export.sh@5 -- # export PATH 00:11:09.556 16:22:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.556 16:22:18 -- nvmf/common.sh@47 -- # : 0 00:11:09.556 16:22:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:09.556 16:22:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:09.556 16:22:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:09.556 16:22:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:09.556 16:22:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:09.556 16:22:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:09.556 16:22:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:09.556 16:22:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:09.556 16:22:18 -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:09.556 16:22:18 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:11:09.556 16:22:18 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:09.556 16:22:18 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:09.556 16:22:18 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:09.556 16:22:18 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:09.556 16:22:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.556 16:22:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:09.556 16:22:18 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.556 16:22:18 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:09.556 16:22:18 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:09.556 16:22:18 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:09.556 16:22:18 -- common/autotest_common.sh@10 -- # set +x 00:11:16.126 16:22:24 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:16.126 16:22:24 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:16.126 16:22:24 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:16.126 16:22:24 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:16.126 16:22:24 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:16.126 16:22:24 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:16.126 16:22:24 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:16.126 16:22:24 -- nvmf/common.sh@295 -- # net_devs=() 00:11:16.126 16:22:24 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:16.126 16:22:24 -- nvmf/common.sh@296 -- # e810=() 00:11:16.126 16:22:24 -- nvmf/common.sh@296 -- # local -ga e810 00:11:16.126 16:22:24 -- nvmf/common.sh@297 -- # x722=() 00:11:16.126 16:22:24 -- nvmf/common.sh@297 -- # local -ga x722 00:11:16.126 16:22:24 -- nvmf/common.sh@298 -- # mlx=() 00:11:16.126 16:22:24 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:16.126 16:22:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:16.126 16:22:24 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:16.126 16:22:24 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:16.126 16:22:24 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:16.126 16:22:24 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:16.126 16:22:24 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:16.126 16:22:24 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:16.126 16:22:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:16.126 16:22:24 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:16.126 16:22:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:16.126 16:22:24 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:16.126 16:22:24 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:16.126 16:22:24 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:16.126 16:22:24 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:16.126 16:22:24 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:16.126 16:22:24 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:16.126 16:22:24 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:16.126 16:22:24 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:16.126 16:22:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:16.126 16:22:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:11:16.126 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:11:16.126 16:22:24 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:16.126 16:22:24 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:16.126 16:22:24 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:11:16.126 16:22:24 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:11:16.126 16:22:24 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:16.126 16:22:24 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:16.126 16:22:24 -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:11:16.126 16:22:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:11:16.126 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:11:16.126 16:22:24 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:16.126 16:22:24 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:16.126 16:22:24 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:11:16.126 16:22:24 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:11:16.126 16:22:24 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:16.126 16:22:24 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:16.126 16:22:24 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:16.126 16:22:24 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:16.126 16:22:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:16.126 16:22:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:16.126 16:22:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:16.126 16:22:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:16.126 16:22:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:11:16.126 Found net devices under 0000:18:00.0: mlx_0_0 00:11:16.126 16:22:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:16.126 16:22:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:16.126 16:22:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:16.126 16:22:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:16.126 16:22:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:16.126 16:22:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:11:16.126 Found net devices under 0000:18:00.1: mlx_0_1 00:11:16.126 16:22:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:16.126 16:22:24 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:16.126 16:22:24 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:16.126 16:22:24 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:16.126 16:22:24 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:11:16.126 16:22:24 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:11:16.126 16:22:24 -- nvmf/common.sh@409 -- # rdma_device_init 00:11:16.126 16:22:24 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:11:16.126 16:22:24 -- nvmf/common.sh@58 -- # uname 00:11:16.126 16:22:24 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:16.126 16:22:24 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:16.126 16:22:24 -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:16.126 16:22:24 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:16.126 16:22:24 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:16.126 16:22:24 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:16.126 16:22:24 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:16.126 16:22:24 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:16.126 16:22:24 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:11:16.126 16:22:24 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:16.126 16:22:24 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:16.126 16:22:24 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:16.126 16:22:24 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:16.126 16:22:24 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:16.126 16:22:24 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:16.126 16:22:24 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 
00:11:16.126 16:22:24 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:16.126 16:22:24 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:16.126 16:22:24 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:16.126 16:22:24 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:16.126 16:22:24 -- nvmf/common.sh@105 -- # continue 2 00:11:16.126 16:22:24 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:16.126 16:22:24 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:16.126 16:22:24 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:16.126 16:22:24 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:16.126 16:22:24 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:16.126 16:22:24 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:16.126 16:22:24 -- nvmf/common.sh@105 -- # continue 2 00:11:16.126 16:22:24 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:16.126 16:22:24 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:16.126 16:22:24 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:16.126 16:22:24 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:16.126 16:22:24 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:16.126 16:22:24 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:16.126 16:22:24 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:16.126 16:22:24 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:16.126 16:22:24 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:16.126 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:16.126 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:11:16.126 altname enp24s0f0np0 00:11:16.126 altname ens785f0np0 00:11:16.126 inet 192.168.100.8/24 scope global mlx_0_0 00:11:16.126 valid_lft forever preferred_lft forever 00:11:16.126 16:22:24 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:16.126 16:22:24 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:16.126 16:22:24 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:16.126 16:22:24 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:16.126 16:22:24 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:16.126 16:22:24 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:16.126 16:22:24 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:16.126 16:22:24 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:16.126 16:22:24 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:16.126 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:16.126 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:11:16.126 altname enp24s0f1np1 00:11:16.126 altname ens785f1np1 00:11:16.126 inet 192.168.100.9/24 scope global mlx_0_1 00:11:16.126 valid_lft forever preferred_lft forever 00:11:16.126 16:22:24 -- nvmf/common.sh@411 -- # return 0 00:11:16.126 16:22:24 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:16.126 16:22:24 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:16.126 16:22:24 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:11:16.126 16:22:24 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:11:16.126 16:22:24 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:16.126 16:22:24 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:16.126 16:22:24 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:16.126 16:22:24 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:16.126 16:22:24 -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:16.126 16:22:24 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:16.126 16:22:24 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:16.126 16:22:24 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:16.126 16:22:24 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:16.126 16:22:24 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:16.126 16:22:24 -- nvmf/common.sh@105 -- # continue 2 00:11:16.126 16:22:24 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:16.126 16:22:24 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:16.126 16:22:24 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:16.126 16:22:24 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:16.126 16:22:24 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:16.127 16:22:24 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:16.127 16:22:24 -- nvmf/common.sh@105 -- # continue 2 00:11:16.127 16:22:24 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:16.127 16:22:24 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:16.127 16:22:24 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:16.127 16:22:24 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:16.127 16:22:24 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:16.127 16:22:24 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:16.127 16:22:24 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:16.127 16:22:24 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:16.127 16:22:24 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:16.127 16:22:24 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:16.127 16:22:24 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:16.127 16:22:24 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:16.127 16:22:24 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:11:16.127 192.168.100.9' 00:11:16.127 16:22:24 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:11:16.127 192.168.100.9' 00:11:16.127 16:22:24 -- nvmf/common.sh@446 -- # head -n 1 00:11:16.127 16:22:24 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:16.127 16:22:24 -- nvmf/common.sh@447 -- # tail -n +2 00:11:16.127 16:22:24 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:11:16.127 192.168.100.9' 00:11:16.127 16:22:24 -- nvmf/common.sh@447 -- # head -n 1 00:11:16.127 16:22:24 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:16.127 16:22:24 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:11:16.127 16:22:24 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:16.127 16:22:24 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:11:16.127 16:22:24 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:11:16.127 16:22:24 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:11:16.127 16:22:24 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:16.127 16:22:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:16.127 16:22:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:16.127 16:22:24 -- common/autotest_common.sh@10 -- # set +x 00:11:16.127 16:22:24 -- nvmf/common.sh@470 -- # nvmfpid=405921 00:11:16.127 16:22:24 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:16.127 16:22:24 -- nvmf/common.sh@471 -- # waitforlisten 405921 00:11:16.127 16:22:24 -- 
common/autotest_common.sh@817 -- # '[' -z 405921 ']' 00:11:16.127 16:22:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.127 16:22:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:16.127 16:22:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.127 16:22:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:16.127 16:22:24 -- common/autotest_common.sh@10 -- # set +x 00:11:16.127 [2024-04-26 16:22:24.539994] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:11:16.127 [2024-04-26 16:22:24.540054] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:16.127 EAL: No free 2048 kB hugepages reported on node 1 00:11:16.127 [2024-04-26 16:22:24.612010] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:16.127 [2024-04-26 16:22:24.692648] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:16.127 [2024-04-26 16:22:24.692697] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:16.127 [2024-04-26 16:22:24.692706] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:16.127 [2024-04-26 16:22:24.692730] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:16.127 [2024-04-26 16:22:24.692737] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:16.127 [2024-04-26 16:22:24.692845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:16.127 [2024-04-26 16:22:24.692925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:16.127 [2024-04-26 16:22:24.692927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:16.385 16:22:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:16.385 16:22:25 -- common/autotest_common.sh@850 -- # return 0 00:11:16.385 16:22:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:16.385 16:22:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:16.385 16:22:25 -- common/autotest_common.sh@10 -- # set +x 00:11:16.385 16:22:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:16.385 16:22:25 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:16.385 16:22:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:16.385 16:22:25 -- common/autotest_common.sh@10 -- # set +x 00:11:16.644 [2024-04-26 16:22:25.420708] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x7a1b30/0x7a6020) succeed. 00:11:16.644 [2024-04-26 16:22:25.430834] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x7a30d0/0x7e76b0) succeed. 
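The rpc_cmd calls in this test appear to be the suite's wrapper around SPDK's scripts/rpc.py talking to the target's RPC socket (the wrapper itself is not shown in this excerpt). The RDMA transport just created above, and the subsystem, listener, and null bdev set up in the records that follow, correspond roughly to these standalone invocations; the arguments mirror the trace, and this is a sketch rather than part of the recorded run:

# run from the SPDK source tree; -s selects the target's RPC socket
./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
./scripts/rpc.py -s /var/tmp/spdk.sock bdev_null_create NULL1 1000 512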
00:11:16.644 16:22:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:16.644 16:22:25 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:16.644 16:22:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:16.644 16:22:25 -- common/autotest_common.sh@10 -- # set +x 00:11:16.644 16:22:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:16.644 16:22:25 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:16.644 16:22:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:16.644 16:22:25 -- common/autotest_common.sh@10 -- # set +x 00:11:16.644 [2024-04-26 16:22:25.550238] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:16.644 16:22:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:16.644 16:22:25 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:16.644 16:22:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:16.644 16:22:25 -- common/autotest_common.sh@10 -- # set +x 00:11:16.644 NULL1 00:11:16.644 16:22:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:16.644 16:22:25 -- target/connect_stress.sh@21 -- # PERF_PID=406039 00:11:16.644 16:22:25 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:16.644 16:22:25 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:16.644 16:22:25 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:16.644 16:22:25 -- target/connect_stress.sh@27 -- # seq 1 20 00:11:16.644 16:22:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:16.644 16:22:25 -- target/connect_stress.sh@28 -- # cat 00:11:16.644 16:22:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:16.644 16:22:25 -- target/connect_stress.sh@28 -- # cat 00:11:16.644 16:22:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:16.644 16:22:25 -- target/connect_stress.sh@28 -- # cat 00:11:16.644 16:22:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:16.644 16:22:25 -- target/connect_stress.sh@28 -- # cat 00:11:16.644 16:22:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:16.644 16:22:25 -- target/connect_stress.sh@28 -- # cat 00:11:16.644 EAL: No free 2048 kB hugepages reported on node 1 00:11:16.644 16:22:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:16.644 16:22:25 -- target/connect_stress.sh@28 -- # cat 00:11:16.644 16:22:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:16.644 16:22:25 -- target/connect_stress.sh@28 -- # cat 00:11:16.644 16:22:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:16.644 16:22:25 -- target/connect_stress.sh@28 -- # cat 00:11:16.644 16:22:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:16.644 16:22:25 -- target/connect_stress.sh@28 -- # cat 00:11:16.644 16:22:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:16.644 16:22:25 -- target/connect_stress.sh@28 -- # cat 00:11:16.644 16:22:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:16.644 16:22:25 -- target/connect_stress.sh@28 -- # cat 
00:11:16.644 16:22:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:16.644 16:22:25 -- target/connect_stress.sh@28 -- # cat 00:11:16.644 16:22:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:16.644 16:22:25 -- target/connect_stress.sh@28 -- # cat 00:11:16.644 16:22:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:16.644 16:22:25 -- target/connect_stress.sh@28 -- # cat 00:11:16.644 16:22:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:16.644 16:22:25 -- target/connect_stress.sh@28 -- # cat 00:11:16.644 16:22:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:16.644 16:22:25 -- target/connect_stress.sh@28 -- # cat 00:11:16.644 16:22:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:16.644 16:22:25 -- target/connect_stress.sh@28 -- # cat 00:11:16.644 16:22:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:16.645 16:22:25 -- target/connect_stress.sh@28 -- # cat 00:11:16.903 16:22:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:16.903 16:22:25 -- target/connect_stress.sh@28 -- # cat 00:11:16.903 16:22:25 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:16.903 16:22:25 -- target/connect_stress.sh@28 -- # cat 00:11:16.903 16:22:25 -- target/connect_stress.sh@34 -- # kill -0 406039 00:11:16.903 16:22:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:16.903 16:22:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:16.903 16:22:25 -- common/autotest_common.sh@10 -- # set +x 00:11:17.161 16:22:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:17.161 16:22:26 -- target/connect_stress.sh@34 -- # kill -0 406039 00:11:17.161 16:22:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:17.161 16:22:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:17.161 16:22:26 -- common/autotest_common.sh@10 -- # set +x 00:11:17.419 16:22:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:17.419 16:22:26 -- target/connect_stress.sh@34 -- # kill -0 406039 00:11:17.419 16:22:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:17.419 16:22:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:17.419 16:22:26 -- common/autotest_common.sh@10 -- # set +x 00:11:17.678 16:22:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:17.678 16:22:26 -- target/connect_stress.sh@34 -- # kill -0 406039 00:11:17.678 16:22:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:17.678 16:22:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:17.678 16:22:26 -- common/autotest_common.sh@10 -- # set +x 00:11:18.245 16:22:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:18.245 16:22:26 -- target/connect_stress.sh@34 -- # kill -0 406039 00:11:18.245 16:22:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:18.245 16:22:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:18.245 16:22:26 -- common/autotest_common.sh@10 -- # set +x 00:11:18.503 16:22:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:18.504 16:22:27 -- target/connect_stress.sh@34 -- # kill -0 406039 00:11:18.504 16:22:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:18.504 16:22:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:18.504 16:22:27 -- common/autotest_common.sh@10 -- # set +x 00:11:18.762 16:22:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:18.762 16:22:27 -- target/connect_stress.sh@34 -- # kill -0 406039 00:11:18.762 16:22:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:18.762 16:22:27 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:11:18.762 16:22:27 -- common/autotest_common.sh@10 -- # set +x 00:11:19.020 16:22:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:19.020 16:22:27 -- target/connect_stress.sh@34 -- # kill -0 406039 00:11:19.020 16:22:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:19.020 16:22:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:19.020 16:22:27 -- common/autotest_common.sh@10 -- # set +x 00:11:19.278 16:22:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:19.278 16:22:28 -- target/connect_stress.sh@34 -- # kill -0 406039 00:11:19.278 16:22:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:19.278 16:22:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:19.278 16:22:28 -- common/autotest_common.sh@10 -- # set +x 00:11:19.846 16:22:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:19.846 16:22:28 -- target/connect_stress.sh@34 -- # kill -0 406039 00:11:19.846 16:22:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:19.846 16:22:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:19.846 16:22:28 -- common/autotest_common.sh@10 -- # set +x 00:11:20.105 16:22:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:20.105 16:22:28 -- target/connect_stress.sh@34 -- # kill -0 406039 00:11:20.105 16:22:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:20.105 16:22:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:20.105 16:22:28 -- common/autotest_common.sh@10 -- # set +x 00:11:20.363 16:22:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:20.363 16:22:29 -- target/connect_stress.sh@34 -- # kill -0 406039 00:11:20.363 16:22:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:20.363 16:22:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:20.364 16:22:29 -- common/autotest_common.sh@10 -- # set +x 00:11:20.622 16:22:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:20.622 16:22:29 -- target/connect_stress.sh@34 -- # kill -0 406039 00:11:20.622 16:22:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:20.622 16:22:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:20.622 16:22:29 -- common/autotest_common.sh@10 -- # set +x 00:11:21.190 16:22:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:21.190 16:22:29 -- target/connect_stress.sh@34 -- # kill -0 406039 00:11:21.190 16:22:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:21.190 16:22:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:21.190 16:22:29 -- common/autotest_common.sh@10 -- # set +x 00:11:21.449 16:22:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:21.449 16:22:30 -- target/connect_stress.sh@34 -- # kill -0 406039 00:11:21.449 16:22:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:21.449 16:22:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:21.449 16:22:30 -- common/autotest_common.sh@10 -- # set +x 00:11:21.708 16:22:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:21.708 16:22:30 -- target/connect_stress.sh@34 -- # kill -0 406039 00:11:21.708 16:22:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:21.708 16:22:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:21.708 16:22:30 -- common/autotest_common.sh@10 -- # set +x 00:11:21.967 16:22:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:21.967 16:22:30 -- target/connect_stress.sh@34 -- # kill -0 406039 00:11:21.967 16:22:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:21.967 16:22:30 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:11:21.967 16:22:30 -- common/autotest_common.sh@10 -- # set +x 00:11:22.225 16:22:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:22.225 16:22:31 -- target/connect_stress.sh@34 -- # kill -0 406039 00:11:22.225 16:22:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:22.225 16:22:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:22.225 16:22:31 -- common/autotest_common.sh@10 -- # set +x 00:11:22.793 16:22:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:22.793 16:22:31 -- target/connect_stress.sh@34 -- # kill -0 406039 00:11:22.793 16:22:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:22.793 16:22:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:22.793 16:22:31 -- common/autotest_common.sh@10 -- # set +x 00:11:23.051 16:22:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:23.051 16:22:31 -- target/connect_stress.sh@34 -- # kill -0 406039 00:11:23.051 16:22:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:23.051 16:22:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:23.051 16:22:31 -- common/autotest_common.sh@10 -- # set +x 00:11:23.309 16:22:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:23.309 16:22:32 -- target/connect_stress.sh@34 -- # kill -0 406039 00:11:23.309 16:22:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:23.309 16:22:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:23.309 16:22:32 -- common/autotest_common.sh@10 -- # set +x 00:11:23.568 16:22:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:23.568 16:22:32 -- target/connect_stress.sh@34 -- # kill -0 406039 00:11:23.568 16:22:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:23.568 16:22:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:23.568 16:22:32 -- common/autotest_common.sh@10 -- # set +x 00:11:23.826 16:22:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:23.826 16:22:32 -- target/connect_stress.sh@34 -- # kill -0 406039 00:11:23.826 16:22:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:23.826 16:22:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:23.826 16:22:32 -- common/autotest_common.sh@10 -- # set +x 00:11:24.393 16:22:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:24.393 16:22:33 -- target/connect_stress.sh@34 -- # kill -0 406039 00:11:24.393 16:22:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:24.393 16:22:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:24.393 16:22:33 -- common/autotest_common.sh@10 -- # set +x 00:11:24.651 16:22:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:24.651 16:22:33 -- target/connect_stress.sh@34 -- # kill -0 406039 00:11:24.651 16:22:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:24.651 16:22:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:24.651 16:22:33 -- common/autotest_common.sh@10 -- # set +x 00:11:24.911 16:22:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:24.911 16:22:33 -- target/connect_stress.sh@34 -- # kill -0 406039 00:11:24.911 16:22:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:24.911 16:22:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:24.911 16:22:33 -- common/autotest_common.sh@10 -- # set +x 00:11:25.170 16:22:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:25.170 16:22:34 -- target/connect_stress.sh@34 -- # kill -0 406039 00:11:25.170 16:22:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:25.170 16:22:34 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:11:25.170 16:22:34 -- common/autotest_common.sh@10 -- # set +x 00:11:25.739 16:22:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:25.739 16:22:34 -- target/connect_stress.sh@34 -- # kill -0 406039 00:11:25.739 16:22:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:25.739 16:22:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:25.739 16:22:34 -- common/autotest_common.sh@10 -- # set +x 00:11:25.998 16:22:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:25.998 16:22:34 -- target/connect_stress.sh@34 -- # kill -0 406039 00:11:25.998 16:22:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:25.998 16:22:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:25.998 16:22:34 -- common/autotest_common.sh@10 -- # set +x 00:11:26.257 16:22:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:26.257 16:22:35 -- target/connect_stress.sh@34 -- # kill -0 406039 00:11:26.257 16:22:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:26.257 16:22:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:26.257 16:22:35 -- common/autotest_common.sh@10 -- # set +x 00:11:26.516 16:22:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:26.516 16:22:35 -- target/connect_stress.sh@34 -- # kill -0 406039 00:11:26.516 16:22:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:26.516 16:22:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:26.516 16:22:35 -- common/autotest_common.sh@10 -- # set +x 00:11:26.774 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:11:26.774 16:22:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:26.774 16:22:35 -- target/connect_stress.sh@34 -- # kill -0 406039 00:11:26.774 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (406039) - No such process 00:11:26.774 16:22:35 -- target/connect_stress.sh@38 -- # wait 406039 00:11:26.774 16:22:35 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:26.774 16:22:35 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:26.774 16:22:35 -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:26.774 16:22:35 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:26.774 16:22:35 -- nvmf/common.sh@117 -- # sync 00:11:26.774 16:22:35 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:26.774 16:22:35 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:26.774 16:22:35 -- nvmf/common.sh@120 -- # set +e 00:11:26.774 16:22:35 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:26.774 16:22:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:27.034 rmmod nvme_rdma 00:11:27.034 rmmod nvme_fabrics 00:11:27.034 16:22:35 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:27.034 16:22:35 -- nvmf/common.sh@124 -- # set -e 00:11:27.034 16:22:35 -- nvmf/common.sh@125 -- # return 0 00:11:27.034 16:22:35 -- nvmf/common.sh@478 -- # '[' -n 405921 ']' 00:11:27.034 16:22:35 -- nvmf/common.sh@479 -- # killprocess 405921 00:11:27.034 16:22:35 -- common/autotest_common.sh@936 -- # '[' -z 405921 ']' 00:11:27.034 16:22:35 -- common/autotest_common.sh@940 -- # kill -0 405921 00:11:27.034 16:22:35 -- common/autotest_common.sh@941 -- # uname 00:11:27.034 16:22:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:27.034 16:22:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 405921 00:11:27.034 16:22:35 -- 
common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:27.034 16:22:35 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:27.034 16:22:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 405921' 00:11:27.034 killing process with pid 405921 00:11:27.034 16:22:35 -- common/autotest_common.sh@955 -- # kill 405921 00:11:27.034 16:22:35 -- common/autotest_common.sh@960 -- # wait 405921 00:11:27.034 [2024-04-26 16:22:35.976058] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:11:27.293 16:22:36 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:27.293 16:22:36 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:11:27.293 00:11:27.293 real 0m17.818s 00:11:27.293 user 0m42.129s 00:11:27.293 sys 0m7.238s 00:11:27.293 16:22:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:27.293 16:22:36 -- common/autotest_common.sh@10 -- # set +x 00:11:27.293 ************************************ 00:11:27.293 END TEST nvmf_connect_stress 00:11:27.293 ************************************ 00:11:27.293 16:22:36 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:11:27.293 16:22:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:27.293 16:22:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:27.293 16:22:36 -- common/autotest_common.sh@10 -- # set +x 00:11:27.552 ************************************ 00:11:27.552 START TEST nvmf_fused_ordering 00:11:27.552 ************************************ 00:11:27.552 16:22:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:11:27.552 * Looking for test storage... 
00:11:27.552 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:27.552 16:22:36 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:27.552 16:22:36 -- nvmf/common.sh@7 -- # uname -s 00:11:27.552 16:22:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:27.552 16:22:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:27.552 16:22:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:27.552 16:22:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:27.552 16:22:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:27.552 16:22:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:27.552 16:22:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:27.552 16:22:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:27.552 16:22:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:27.552 16:22:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:27.553 16:22:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:11:27.553 16:22:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:11:27.553 16:22:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:27.553 16:22:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:27.553 16:22:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:27.553 16:22:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:27.553 16:22:36 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:27.553 16:22:36 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:27.553 16:22:36 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:27.553 16:22:36 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:27.553 16:22:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.553 16:22:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.553 16:22:36 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.553 16:22:36 -- paths/export.sh@5 -- # export PATH 00:11:27.553 16:22:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.553 16:22:36 -- nvmf/common.sh@47 -- # : 0 00:11:27.553 16:22:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:27.553 16:22:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:27.553 16:22:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:27.553 16:22:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:27.553 16:22:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:27.553 16:22:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:27.553 16:22:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:27.553 16:22:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:27.553 16:22:36 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:27.553 16:22:36 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:11:27.553 16:22:36 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:27.553 16:22:36 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:27.553 16:22:36 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:27.553 16:22:36 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:27.553 16:22:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.553 16:22:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:27.553 16:22:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.553 16:22:36 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:27.553 16:22:36 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:27.553 16:22:36 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:27.553 16:22:36 -- common/autotest_common.sh@10 -- # set +x 00:11:34.121 16:22:42 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:34.121 16:22:42 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:34.121 16:22:42 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:34.121 16:22:42 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:34.121 16:22:42 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:34.121 16:22:42 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:34.121 16:22:42 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:34.121 16:22:42 -- nvmf/common.sh@295 -- # net_devs=() 00:11:34.121 16:22:42 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:34.121 16:22:42 -- nvmf/common.sh@296 -- # e810=() 00:11:34.121 16:22:42 -- nvmf/common.sh@296 -- # local -ga e810 00:11:34.121 16:22:42 -- nvmf/common.sh@297 -- # x722=() 
00:11:34.121 16:22:42 -- nvmf/common.sh@297 -- # local -ga x722 00:11:34.121 16:22:42 -- nvmf/common.sh@298 -- # mlx=() 00:11:34.121 16:22:42 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:34.121 16:22:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:34.121 16:22:42 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:34.121 16:22:42 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:34.121 16:22:42 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:34.121 16:22:42 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:34.121 16:22:42 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:34.121 16:22:42 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:34.121 16:22:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:34.121 16:22:42 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:34.121 16:22:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:34.121 16:22:42 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:34.121 16:22:42 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:34.121 16:22:42 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:34.121 16:22:42 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:34.121 16:22:42 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:34.121 16:22:42 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:34.121 16:22:42 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:34.121 16:22:42 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:34.121 16:22:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:34.121 16:22:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:11:34.121 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:11:34.121 16:22:42 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:34.121 16:22:42 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:34.121 16:22:42 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:11:34.121 16:22:42 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:11:34.121 16:22:42 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:34.121 16:22:42 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:34.121 16:22:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:34.121 16:22:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:11:34.121 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:11:34.121 16:22:42 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:34.121 16:22:42 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:34.121 16:22:42 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:11:34.122 16:22:42 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:11:34.122 16:22:42 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:34.122 16:22:42 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:34.122 16:22:42 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:34.122 16:22:42 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:34.122 16:22:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:34.122 16:22:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.122 16:22:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:34.122 16:22:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.122 16:22:42 -- nvmf/common.sh@389 -- # 
echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:11:34.122 Found net devices under 0000:18:00.0: mlx_0_0 00:11:34.122 16:22:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.122 16:22:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:34.122 16:22:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.122 16:22:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:34.122 16:22:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.122 16:22:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:11:34.122 Found net devices under 0000:18:00.1: mlx_0_1 00:11:34.122 16:22:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.122 16:22:42 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:34.122 16:22:42 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:34.122 16:22:42 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:34.122 16:22:42 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:11:34.122 16:22:42 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:11:34.122 16:22:42 -- nvmf/common.sh@409 -- # rdma_device_init 00:11:34.122 16:22:42 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:11:34.122 16:22:42 -- nvmf/common.sh@58 -- # uname 00:11:34.122 16:22:42 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:34.122 16:22:42 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:34.122 16:22:42 -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:34.122 16:22:42 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:34.122 16:22:42 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:34.122 16:22:42 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:34.122 16:22:42 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:34.122 16:22:42 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:34.122 16:22:42 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:11:34.122 16:22:42 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:34.122 16:22:42 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:34.122 16:22:42 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:34.122 16:22:42 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:34.122 16:22:42 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:34.122 16:22:42 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:34.122 16:22:42 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:34.122 16:22:42 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:34.122 16:22:42 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:34.122 16:22:42 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:34.122 16:22:42 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:34.122 16:22:42 -- nvmf/common.sh@105 -- # continue 2 00:11:34.122 16:22:42 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:34.122 16:22:42 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:34.122 16:22:42 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:34.122 16:22:42 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:34.122 16:22:42 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:34.122 16:22:42 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:34.122 16:22:42 -- nvmf/common.sh@105 -- # continue 2 00:11:34.122 16:22:42 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:34.122 16:22:42 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:34.122 16:22:42 -- nvmf/common.sh@112 -- # 
interface=mlx_0_0 00:11:34.122 16:22:42 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:34.122 16:22:42 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:34.122 16:22:42 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:34.122 16:22:42 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:34.122 16:22:42 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:34.122 16:22:42 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:34.122 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:34.122 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:11:34.122 altname enp24s0f0np0 00:11:34.122 altname ens785f0np0 00:11:34.122 inet 192.168.100.8/24 scope global mlx_0_0 00:11:34.122 valid_lft forever preferred_lft forever 00:11:34.122 16:22:42 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:34.122 16:22:42 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:34.122 16:22:42 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:34.122 16:22:42 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:34.122 16:22:42 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:34.122 16:22:42 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:34.122 16:22:42 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:34.122 16:22:42 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:34.122 16:22:42 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:34.122 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:34.122 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:11:34.122 altname enp24s0f1np1 00:11:34.122 altname ens785f1np1 00:11:34.122 inet 192.168.100.9/24 scope global mlx_0_1 00:11:34.122 valid_lft forever preferred_lft forever 00:11:34.122 16:22:42 -- nvmf/common.sh@411 -- # return 0 00:11:34.122 16:22:42 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:34.122 16:22:42 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:34.122 16:22:42 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:11:34.122 16:22:42 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:11:34.122 16:22:42 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:34.122 16:22:42 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:34.122 16:22:42 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:34.122 16:22:42 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:34.122 16:22:42 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:34.122 16:22:42 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:34.122 16:22:42 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:34.122 16:22:42 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:34.122 16:22:42 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:34.122 16:22:42 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:34.122 16:22:42 -- nvmf/common.sh@105 -- # continue 2 00:11:34.122 16:22:42 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:34.122 16:22:42 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:34.122 16:22:42 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:34.122 16:22:42 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:34.122 16:22:42 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:34.122 16:22:42 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:34.122 16:22:42 -- nvmf/common.sh@105 -- # continue 2 00:11:34.122 16:22:42 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:34.122 16:22:42 -- 
nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:34.122 16:22:42 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:34.122 16:22:42 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:34.122 16:22:42 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:34.122 16:22:42 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:34.122 16:22:42 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:34.122 16:22:42 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:34.122 16:22:42 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:34.122 16:22:42 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:34.122 16:22:42 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:34.122 16:22:42 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:34.122 16:22:42 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:11:34.122 192.168.100.9' 00:11:34.122 16:22:42 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:11:34.122 192.168.100.9' 00:11:34.122 16:22:42 -- nvmf/common.sh@446 -- # head -n 1 00:11:34.122 16:22:42 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:34.122 16:22:42 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:11:34.122 192.168.100.9' 00:11:34.122 16:22:42 -- nvmf/common.sh@447 -- # tail -n +2 00:11:34.122 16:22:42 -- nvmf/common.sh@447 -- # head -n 1 00:11:34.122 16:22:42 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:34.122 16:22:42 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:11:34.122 16:22:42 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:34.122 16:22:42 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:11:34.122 16:22:42 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:11:34.122 16:22:42 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:11:34.122 16:22:42 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:34.122 16:22:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:34.122 16:22:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:34.122 16:22:42 -- common/autotest_common.sh@10 -- # set +x 00:11:34.122 16:22:42 -- nvmf/common.sh@470 -- # nvmfpid=410354 00:11:34.122 16:22:42 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:34.122 16:22:42 -- nvmf/common.sh@471 -- # waitforlisten 410354 00:11:34.122 16:22:42 -- common/autotest_common.sh@817 -- # '[' -z 410354 ']' 00:11:34.122 16:22:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.122 16:22:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:34.122 16:22:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.122 16:22:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:34.122 16:22:42 -- common/autotest_common.sh@10 -- # set +x 00:11:34.122 [2024-04-26 16:22:42.520426] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
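The get_ip_address helper seen in both test setups is just an ip(8) pipeline, and waitforlisten, per its own message above, blocks until the target is accepting RPCs on /var/tmp/spdk.sock. A condensed sketch of both; the interface name and socket path are taken from the trace, and the polling loop is only an illustrative stand-in for the real helper:

# IPv4 address of an RDMA-capable netdev, as used to build RDMA_IP_LIST
ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1    # -> 192.168.100.8
# crude stand-in for waitforlisten: poll until the RPC socket appears
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done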
00:11:34.122 [2024-04-26 16:22:42.520482] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:34.122 EAL: No free 2048 kB hugepages reported on node 1 00:11:34.122 [2024-04-26 16:22:42.593962] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.122 [2024-04-26 16:22:42.673520] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:34.122 [2024-04-26 16:22:42.673568] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:34.122 [2024-04-26 16:22:42.673577] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:34.122 [2024-04-26 16:22:42.673601] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:34.122 [2024-04-26 16:22:42.673609] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:34.122 [2024-04-26 16:22:42.673638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:34.380 16:22:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:34.380 16:22:43 -- common/autotest_common.sh@850 -- # return 0 00:11:34.380 16:22:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:34.380 16:22:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:34.380 16:22:43 -- common/autotest_common.sh@10 -- # set +x 00:11:34.380 16:22:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:34.380 16:22:43 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:34.380 16:22:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:34.380 16:22:43 -- common/autotest_common.sh@10 -- # set +x 00:11:34.380 [2024-04-26 16:22:43.396906] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x179b480/0x179f970) succeed. 00:11:34.639 [2024-04-26 16:22:43.406737] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x179c980/0x17e1000) succeed. 
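The trap registered just above ties teardown to every exit path of the test script, so a failure mid-test still dumps shared memory and shuts the target down. The same pattern in isolation; the handler names are the suite's own functions, shown here only for shape:

cleanup() {
    process_shm --id "$NVMF_APP_SHM_ID" || :    # suite helper; best-effort diagnostics dump
    nvmftestfini                                # suite teardown helper
}
trap cleanup SIGINT SIGTERM EXIT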
00:11:34.639 16:22:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:34.639 16:22:43 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:34.639 16:22:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:34.639 16:22:43 -- common/autotest_common.sh@10 -- # set +x 00:11:34.639 16:22:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:34.639 16:22:43 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:34.639 16:22:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:34.639 16:22:43 -- common/autotest_common.sh@10 -- # set +x 00:11:34.639 [2024-04-26 16:22:43.465938] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:34.639 16:22:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:34.639 16:22:43 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:34.639 16:22:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:34.639 16:22:43 -- common/autotest_common.sh@10 -- # set +x 00:11:34.639 NULL1 00:11:34.639 16:22:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:34.639 16:22:43 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:34.639 16:22:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:34.639 16:22:43 -- common/autotest_common.sh@10 -- # set +x 00:11:34.639 16:22:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:34.639 16:22:43 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:34.639 16:22:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:34.639 16:22:43 -- common/autotest_common.sh@10 -- # set +x 00:11:34.639 16:22:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:34.639 16:22:43 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:34.639 [2024-04-26 16:22:43.524813] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
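This excerpt drives the listener with SPDK's userspace fused_ordering tool via the 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:...' transport ID shown above; the NVME_CONNECT='nvme connect -i 15' string built during nvmftestinit is presumably what the kernel-initiator tests elsewhere in the suite use instead. A rough host-side equivalent, not executed in this run:

nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1
nvme list                                        # the NULL1-backed namespace should show up
nvme disconnect -n nqn.2016-06.io.spdk:cnode1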
00:11:34.639 [2024-04-26 16:22:43.524850] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid410411 ] 00:11:34.639 EAL: No free 2048 kB hugepages reported on node 1 00:11:34.898 Attached to nqn.2016-06.io.spdk:cnode1 00:11:34.898 Namespace ID: 1 size: 1GB 00:11:34.898 fused_ordering(0) 00:11:34.898 fused_ordering(1) 00:11:34.898 fused_ordering(2) 00:11:34.898 fused_ordering(3) 00:11:34.898 fused_ordering(4) 00:11:34.898 fused_ordering(5) 00:11:34.898 fused_ordering(6) 00:11:34.898 fused_ordering(7) 00:11:34.898 fused_ordering(8) 00:11:34.898 fused_ordering(9) 00:11:34.898 fused_ordering(10) 00:11:34.898 fused_ordering(11) 00:11:34.898 fused_ordering(12) 00:11:34.898 fused_ordering(13) 00:11:34.898 fused_ordering(14) 00:11:34.898 fused_ordering(15) 00:11:34.898 fused_ordering(16) 00:11:34.898 fused_ordering(17) 00:11:34.898 fused_ordering(18) 00:11:34.898 fused_ordering(19) 00:11:34.898 fused_ordering(20) 00:11:34.898 fused_ordering(21) 00:11:34.898 fused_ordering(22) 00:11:34.898 fused_ordering(23) 00:11:34.898 fused_ordering(24) 00:11:34.898 fused_ordering(25) 00:11:34.898 fused_ordering(26) 00:11:34.898 fused_ordering(27) 00:11:34.898 fused_ordering(28) 00:11:34.898 fused_ordering(29) 00:11:34.898 fused_ordering(30) 00:11:34.898 fused_ordering(31) 00:11:34.898 fused_ordering(32) 00:11:34.898 fused_ordering(33) 00:11:34.898 fused_ordering(34) 00:11:34.898 fused_ordering(35) 00:11:34.898 fused_ordering(36) 00:11:34.898 fused_ordering(37) 00:11:34.898 fused_ordering(38) 00:11:34.898 fused_ordering(39) 00:11:34.898 fused_ordering(40) 00:11:34.898 fused_ordering(41) 00:11:34.898 fused_ordering(42) 00:11:34.898 fused_ordering(43) 00:11:34.898 fused_ordering(44) 00:11:34.898 fused_ordering(45) 00:11:34.898 fused_ordering(46) 00:11:34.898 fused_ordering(47) 00:11:34.898 fused_ordering(48) 00:11:34.898 fused_ordering(49) 00:11:34.898 fused_ordering(50) 00:11:34.898 fused_ordering(51) 00:11:34.898 fused_ordering(52) 00:11:34.898 fused_ordering(53) 00:11:34.898 fused_ordering(54) 00:11:34.898 fused_ordering(55) 00:11:34.898 fused_ordering(56) 00:11:34.898 fused_ordering(57) 00:11:34.898 fused_ordering(58) 00:11:34.898 fused_ordering(59) 00:11:34.898 fused_ordering(60) 00:11:34.898 fused_ordering(61) 00:11:34.898 fused_ordering(62) 00:11:34.898 fused_ordering(63) 00:11:34.898 fused_ordering(64) 00:11:34.898 fused_ordering(65) 00:11:34.898 fused_ordering(66) 00:11:34.898 fused_ordering(67) 00:11:34.898 fused_ordering(68) 00:11:34.898 fused_ordering(69) 00:11:34.898 fused_ordering(70) 00:11:34.898 fused_ordering(71) 00:11:34.898 fused_ordering(72) 00:11:34.898 fused_ordering(73) 00:11:34.898 fused_ordering(74) 00:11:34.898 fused_ordering(75) 00:11:34.898 fused_ordering(76) 00:11:34.898 fused_ordering(77) 00:11:34.898 fused_ordering(78) 00:11:34.898 fused_ordering(79) 00:11:34.898 fused_ordering(80) 00:11:34.898 fused_ordering(81) 00:11:34.898 fused_ordering(82) 00:11:34.898 fused_ordering(83) 00:11:34.898 fused_ordering(84) 00:11:34.898 fused_ordering(85) 00:11:34.898 fused_ordering(86) 00:11:34.898 fused_ordering(87) 00:11:34.898 fused_ordering(88) 00:11:34.898 fused_ordering(89) 00:11:34.898 fused_ordering(90) 00:11:34.898 fused_ordering(91) 00:11:34.898 fused_ordering(92) 00:11:34.898 fused_ordering(93) 00:11:34.898 fused_ordering(94) 00:11:34.898 fused_ordering(95) 00:11:34.898 fused_ordering(96) 00:11:34.898 
fused_ordering(97) 00:11:34.898 fused_ordering(98) 00:11:34.898 fused_ordering(99) 00:11:34.898 fused_ordering(100) 00:11:34.898 fused_ordering(101) 00:11:34.898 fused_ordering(102) 00:11:34.898 fused_ordering(103) 00:11:34.898 fused_ordering(104) 00:11:34.898 fused_ordering(105) 00:11:34.898 fused_ordering(106) 00:11:34.898 fused_ordering(107) 00:11:34.898 fused_ordering(108) 00:11:34.898 fused_ordering(109) 00:11:34.898 fused_ordering(110) 00:11:34.898 fused_ordering(111) 00:11:34.898 fused_ordering(112) 00:11:34.898 fused_ordering(113) 00:11:34.898 fused_ordering(114) 00:11:34.898 fused_ordering(115) 00:11:34.898 fused_ordering(116) 00:11:34.898 fused_ordering(117) 00:11:34.898 fused_ordering(118) 00:11:34.898 fused_ordering(119) 00:11:34.898 fused_ordering(120) 00:11:34.898 fused_ordering(121) 00:11:34.898 fused_ordering(122) 00:11:34.898 fused_ordering(123) 00:11:34.898 fused_ordering(124) 00:11:34.898 fused_ordering(125) 00:11:34.898 fused_ordering(126) 00:11:34.898 fused_ordering(127) 00:11:34.898 fused_ordering(128) 00:11:34.898 fused_ordering(129) 00:11:34.898 fused_ordering(130) 00:11:34.898 fused_ordering(131) 00:11:34.898 fused_ordering(132) 00:11:34.898 fused_ordering(133) 00:11:34.898 fused_ordering(134) 00:11:34.898 fused_ordering(135) 00:11:34.898 fused_ordering(136) 00:11:34.898 fused_ordering(137) 00:11:34.898 fused_ordering(138) 00:11:34.898 fused_ordering(139) 00:11:34.898 fused_ordering(140) 00:11:34.898 fused_ordering(141) 00:11:34.898 fused_ordering(142) 00:11:34.898 fused_ordering(143) 00:11:34.898 fused_ordering(144) 00:11:34.898 fused_ordering(145) 00:11:34.898 fused_ordering(146) 00:11:34.898 fused_ordering(147) 00:11:34.898 fused_ordering(148) 00:11:34.898 fused_ordering(149) 00:11:34.898 fused_ordering(150) 00:11:34.898 fused_ordering(151) 00:11:34.898 fused_ordering(152) 00:11:34.898 fused_ordering(153) 00:11:34.898 fused_ordering(154) 00:11:34.898 fused_ordering(155) 00:11:34.898 fused_ordering(156) 00:11:34.898 fused_ordering(157) 00:11:34.898 fused_ordering(158) 00:11:34.898 fused_ordering(159) 00:11:34.898 fused_ordering(160) 00:11:34.898 fused_ordering(161) 00:11:34.898 fused_ordering(162) 00:11:34.898 fused_ordering(163) 00:11:34.898 fused_ordering(164) 00:11:34.898 fused_ordering(165) 00:11:34.898 fused_ordering(166) 00:11:34.898 fused_ordering(167) 00:11:34.898 fused_ordering(168) 00:11:34.898 fused_ordering(169) 00:11:34.898 fused_ordering(170) 00:11:34.898 fused_ordering(171) 00:11:34.898 fused_ordering(172) 00:11:34.898 fused_ordering(173) 00:11:34.898 fused_ordering(174) 00:11:34.898 fused_ordering(175) 00:11:34.899 fused_ordering(176) 00:11:34.899 fused_ordering(177) 00:11:34.899 fused_ordering(178) 00:11:34.899 fused_ordering(179) 00:11:34.899 fused_ordering(180) 00:11:34.899 fused_ordering(181) 00:11:34.899 fused_ordering(182) 00:11:34.899 fused_ordering(183) 00:11:34.899 fused_ordering(184) 00:11:34.899 fused_ordering(185) 00:11:34.899 fused_ordering(186) 00:11:34.899 fused_ordering(187) 00:11:34.899 fused_ordering(188) 00:11:34.899 fused_ordering(189) 00:11:34.899 fused_ordering(190) 00:11:34.899 fused_ordering(191) 00:11:34.899 fused_ordering(192) 00:11:34.899 fused_ordering(193) 00:11:34.899 fused_ordering(194) 00:11:34.899 fused_ordering(195) 00:11:34.899 fused_ordering(196) 00:11:34.899 fused_ordering(197) 00:11:34.899 fused_ordering(198) 00:11:34.899 fused_ordering(199) 00:11:34.899 fused_ordering(200) 00:11:34.899 fused_ordering(201) 00:11:34.899 fused_ordering(202) 00:11:34.899 fused_ordering(203) 00:11:34.899 fused_ordering(204) 
00:11:34.899 fused_ordering(205) 00:11:34.899 fused_ordering(206) 00:11:34.899 fused_ordering(207) 00:11:34.899 fused_ordering(208) 00:11:34.899 fused_ordering(209) 00:11:34.899 fused_ordering(210) 00:11:34.899 fused_ordering(211) 00:11:34.899 fused_ordering(212) 00:11:34.899 fused_ordering(213) 00:11:34.899 fused_ordering(214) 00:11:34.899 fused_ordering(215) 00:11:34.899 fused_ordering(216) 00:11:34.899 fused_ordering(217) 00:11:34.899 fused_ordering(218) 00:11:34.899 fused_ordering(219) 00:11:34.899 fused_ordering(220) 00:11:34.899 fused_ordering(221) 00:11:34.899 fused_ordering(222) 00:11:34.899 fused_ordering(223) 00:11:34.899 fused_ordering(224) 00:11:34.899 fused_ordering(225) 00:11:34.899 fused_ordering(226) 00:11:34.899 fused_ordering(227) 00:11:34.899 fused_ordering(228) 00:11:34.899 fused_ordering(229) 00:11:34.899 fused_ordering(230) 00:11:34.899 fused_ordering(231) 00:11:34.899 fused_ordering(232) 00:11:34.899 fused_ordering(233) 00:11:34.899 fused_ordering(234) 00:11:34.899 fused_ordering(235) 00:11:34.899 fused_ordering(236) 00:11:34.899 fused_ordering(237) 00:11:34.899 fused_ordering(238) 00:11:34.899 fused_ordering(239) 00:11:34.899 fused_ordering(240) 00:11:34.899 fused_ordering(241) 00:11:34.899 fused_ordering(242) 00:11:34.899 fused_ordering(243) 00:11:34.899 fused_ordering(244) 00:11:34.899 fused_ordering(245) 00:11:34.899 fused_ordering(246) 00:11:34.899 fused_ordering(247) 00:11:34.899 fused_ordering(248) 00:11:34.899 fused_ordering(249) 00:11:34.899 fused_ordering(250) 00:11:34.899 fused_ordering(251) 00:11:34.899 fused_ordering(252) 00:11:34.899 fused_ordering(253) 00:11:34.899 fused_ordering(254) 00:11:34.899 fused_ordering(255) 00:11:34.899 fused_ordering(256) 00:11:34.899 fused_ordering(257) 00:11:34.899 fused_ordering(258) 00:11:34.899 fused_ordering(259) 00:11:34.899 fused_ordering(260) 00:11:34.899 fused_ordering(261) 00:11:34.899 fused_ordering(262) 00:11:34.899 fused_ordering(263) 00:11:34.899 fused_ordering(264) 00:11:34.899 fused_ordering(265) 00:11:34.899 fused_ordering(266) 00:11:34.899 fused_ordering(267) 00:11:34.899 fused_ordering(268) 00:11:34.899 fused_ordering(269) 00:11:34.899 fused_ordering(270) 00:11:34.899 fused_ordering(271) 00:11:34.899 fused_ordering(272) 00:11:34.899 fused_ordering(273) 00:11:34.899 fused_ordering(274) 00:11:34.899 fused_ordering(275) 00:11:34.899 fused_ordering(276) 00:11:34.899 fused_ordering(277) 00:11:34.899 fused_ordering(278) 00:11:34.899 fused_ordering(279) 00:11:34.899 fused_ordering(280) 00:11:34.899 fused_ordering(281) 00:11:34.899 fused_ordering(282) 00:11:34.899 fused_ordering(283) 00:11:34.899 fused_ordering(284) 00:11:34.899 fused_ordering(285) 00:11:34.899 fused_ordering(286) 00:11:34.899 fused_ordering(287) 00:11:34.899 fused_ordering(288) 00:11:34.899 fused_ordering(289) 00:11:34.899 fused_ordering(290) 00:11:34.899 fused_ordering(291) 00:11:34.899 fused_ordering(292) 00:11:34.899 fused_ordering(293) 00:11:34.899 fused_ordering(294) 00:11:34.899 fused_ordering(295) 00:11:34.899 fused_ordering(296) 00:11:34.899 fused_ordering(297) 00:11:34.899 fused_ordering(298) 00:11:34.899 fused_ordering(299) 00:11:34.899 fused_ordering(300) 00:11:34.899 fused_ordering(301) 00:11:34.899 fused_ordering(302) 00:11:34.899 fused_ordering(303) 00:11:34.899 fused_ordering(304) 00:11:34.899 fused_ordering(305) 00:11:34.899 fused_ordering(306) 00:11:34.899 fused_ordering(307) 00:11:34.899 fused_ordering(308) 00:11:34.899 fused_ordering(309) 00:11:34.899 fused_ordering(310) 00:11:34.899 fused_ordering(311) 00:11:34.899 
fused_ordering(312) 00:11:34.899 fused_ordering(313) 00:11:34.899 fused_ordering(314) 00:11:34.899 fused_ordering(315) 00:11:34.899 fused_ordering(316) 00:11:34.899 fused_ordering(317) 00:11:34.899 fused_ordering(318) 00:11:34.899 fused_ordering(319) 00:11:34.899 fused_ordering(320) 00:11:34.899 fused_ordering(321) 00:11:34.899 fused_ordering(322) 00:11:34.899 fused_ordering(323) 00:11:34.899 fused_ordering(324) 00:11:34.899 fused_ordering(325) 00:11:34.899 fused_ordering(326) 00:11:34.899 fused_ordering(327) 00:11:34.899 fused_ordering(328) 00:11:34.899 fused_ordering(329) 00:11:34.899 fused_ordering(330) 00:11:34.899 fused_ordering(331) 00:11:34.899 fused_ordering(332) 00:11:34.899 fused_ordering(333) 00:11:34.899 fused_ordering(334) 00:11:34.899 fused_ordering(335) 00:11:34.899 fused_ordering(336) 00:11:34.899 fused_ordering(337) 00:11:34.899 fused_ordering(338) 00:11:34.899 fused_ordering(339) 00:11:34.899 fused_ordering(340) 00:11:34.899 fused_ordering(341) 00:11:34.899 fused_ordering(342) 00:11:34.899 fused_ordering(343) 00:11:34.899 fused_ordering(344) 00:11:34.899 fused_ordering(345) 00:11:34.899 fused_ordering(346) 00:11:34.899 fused_ordering(347) 00:11:34.899 fused_ordering(348) 00:11:34.899 fused_ordering(349) 00:11:34.899 fused_ordering(350) 00:11:34.899 fused_ordering(351) 00:11:34.899 fused_ordering(352) 00:11:34.899 fused_ordering(353) 00:11:34.899 fused_ordering(354) 00:11:34.899 fused_ordering(355) 00:11:34.899 fused_ordering(356) 00:11:34.899 fused_ordering(357) 00:11:34.899 fused_ordering(358) 00:11:34.899 fused_ordering(359) 00:11:34.899 fused_ordering(360) 00:11:34.899 fused_ordering(361) 00:11:34.899 fused_ordering(362) 00:11:34.899 fused_ordering(363) 00:11:34.899 fused_ordering(364) 00:11:34.899 fused_ordering(365) 00:11:34.899 fused_ordering(366) 00:11:34.899 fused_ordering(367) 00:11:34.899 fused_ordering(368) 00:11:34.899 fused_ordering(369) 00:11:34.899 fused_ordering(370) 00:11:34.899 fused_ordering(371) 00:11:34.899 fused_ordering(372) 00:11:34.899 fused_ordering(373) 00:11:34.899 fused_ordering(374) 00:11:34.899 fused_ordering(375) 00:11:34.899 fused_ordering(376) 00:11:34.899 fused_ordering(377) 00:11:34.899 fused_ordering(378) 00:11:34.899 fused_ordering(379) 00:11:34.899 fused_ordering(380) 00:11:34.899 fused_ordering(381) 00:11:34.899 fused_ordering(382) 00:11:34.899 fused_ordering(383) 00:11:34.899 fused_ordering(384) 00:11:34.899 fused_ordering(385) 00:11:34.899 fused_ordering(386) 00:11:34.899 fused_ordering(387) 00:11:34.899 fused_ordering(388) 00:11:34.899 fused_ordering(389) 00:11:34.899 fused_ordering(390) 00:11:34.899 fused_ordering(391) 00:11:34.899 fused_ordering(392) 00:11:34.899 fused_ordering(393) 00:11:34.899 fused_ordering(394) 00:11:34.899 fused_ordering(395) 00:11:34.899 fused_ordering(396) 00:11:34.899 fused_ordering(397) 00:11:34.899 fused_ordering(398) 00:11:34.899 fused_ordering(399) 00:11:34.899 fused_ordering(400) 00:11:34.899 fused_ordering(401) 00:11:34.899 fused_ordering(402) 00:11:34.899 fused_ordering(403) 00:11:34.899 fused_ordering(404) 00:11:34.899 fused_ordering(405) 00:11:34.899 fused_ordering(406) 00:11:34.900 fused_ordering(407) 00:11:34.900 fused_ordering(408) 00:11:34.900 fused_ordering(409) 00:11:34.900 fused_ordering(410) 00:11:34.900 fused_ordering(411) 00:11:34.900 fused_ordering(412) 00:11:34.900 fused_ordering(413) 00:11:34.900 fused_ordering(414) 00:11:34.900 fused_ordering(415) 00:11:34.900 fused_ordering(416) 00:11:34.900 fused_ordering(417) 00:11:34.900 fused_ordering(418) 00:11:34.900 fused_ordering(419) 
00:11:34.900 fused_ordering(420) 00:11:34.900 fused_ordering(421) 00:11:34.900 fused_ordering(422) 00:11:34.900 fused_ordering(423) 00:11:34.900 fused_ordering(424) 00:11:34.900 fused_ordering(425) 00:11:34.900 fused_ordering(426) 00:11:34.900 fused_ordering(427) 00:11:34.900 fused_ordering(428) 00:11:34.900 fused_ordering(429) 00:11:34.900 fused_ordering(430) 00:11:34.900 fused_ordering(431) 00:11:34.900 fused_ordering(432) 00:11:34.900 fused_ordering(433) 00:11:34.900 fused_ordering(434) 00:11:34.900 fused_ordering(435) 00:11:34.900 fused_ordering(436) 00:11:34.900 fused_ordering(437) 00:11:34.900 fused_ordering(438) 00:11:34.900 fused_ordering(439) 00:11:34.900 fused_ordering(440) 00:11:34.900 fused_ordering(441) 00:11:34.900 fused_ordering(442) 00:11:34.900 fused_ordering(443) 00:11:34.900 fused_ordering(444) 00:11:34.900 fused_ordering(445) 00:11:34.900 fused_ordering(446) 00:11:34.900 fused_ordering(447) 00:11:34.900 fused_ordering(448) 00:11:34.900 fused_ordering(449) 00:11:34.900 fused_ordering(450) 00:11:34.900 fused_ordering(451) 00:11:34.900 fused_ordering(452) 00:11:34.900 fused_ordering(453) 00:11:34.900 fused_ordering(454) 00:11:34.900 fused_ordering(455) 00:11:34.900 fused_ordering(456) 00:11:34.900 fused_ordering(457) 00:11:34.900 fused_ordering(458) 00:11:34.900 fused_ordering(459) 00:11:34.900 fused_ordering(460) 00:11:34.900 fused_ordering(461) 00:11:34.900 fused_ordering(462) 00:11:34.900 fused_ordering(463) 00:11:34.900 fused_ordering(464) 00:11:34.900 fused_ordering(465) 00:11:34.900 fused_ordering(466) 00:11:34.900 fused_ordering(467) 00:11:34.900 fused_ordering(468) 00:11:34.900 fused_ordering(469) 00:11:34.900 fused_ordering(470) 00:11:34.900 fused_ordering(471) 00:11:34.900 fused_ordering(472) 00:11:34.900 fused_ordering(473) 00:11:34.900 fused_ordering(474) 00:11:34.900 fused_ordering(475) 00:11:34.900 fused_ordering(476) 00:11:34.900 fused_ordering(477) 00:11:34.900 fused_ordering(478) 00:11:34.900 fused_ordering(479) 00:11:34.900 fused_ordering(480) 00:11:34.900 fused_ordering(481) 00:11:34.900 fused_ordering(482) 00:11:34.900 fused_ordering(483) 00:11:34.900 fused_ordering(484) 00:11:34.900 fused_ordering(485) 00:11:34.900 fused_ordering(486) 00:11:34.900 fused_ordering(487) 00:11:34.900 fused_ordering(488) 00:11:34.900 fused_ordering(489) 00:11:34.900 fused_ordering(490) 00:11:34.900 fused_ordering(491) 00:11:34.900 fused_ordering(492) 00:11:34.900 fused_ordering(493) 00:11:34.900 fused_ordering(494) 00:11:34.900 fused_ordering(495) 00:11:34.900 fused_ordering(496) 00:11:34.900 fused_ordering(497) 00:11:34.900 fused_ordering(498) 00:11:34.900 fused_ordering(499) 00:11:34.900 fused_ordering(500) 00:11:34.900 fused_ordering(501) 00:11:34.900 fused_ordering(502) 00:11:34.900 fused_ordering(503) 00:11:34.900 fused_ordering(504) 00:11:34.900 fused_ordering(505) 00:11:34.900 fused_ordering(506) 00:11:34.900 fused_ordering(507) 00:11:34.900 fused_ordering(508) 00:11:34.900 fused_ordering(509) 00:11:34.900 fused_ordering(510) 00:11:34.900 fused_ordering(511) 00:11:34.900 fused_ordering(512) 00:11:34.900 fused_ordering(513) 00:11:34.900 fused_ordering(514) 00:11:34.900 fused_ordering(515) 00:11:34.900 fused_ordering(516) 00:11:34.900 fused_ordering(517) 00:11:34.900 fused_ordering(518) 00:11:34.900 fused_ordering(519) 00:11:34.900 fused_ordering(520) 00:11:34.900 fused_ordering(521) 00:11:34.900 fused_ordering(522) 00:11:34.900 fused_ordering(523) 00:11:34.900 fused_ordering(524) 00:11:34.900 fused_ordering(525) 00:11:34.900 fused_ordering(526) 00:11:34.900 
fused_ordering(527) 00:11:34.900 fused_ordering(528) 00:11:34.900 fused_ordering(529) 00:11:34.900 fused_ordering(530) 00:11:34.900 fused_ordering(531) 00:11:34.900 fused_ordering(532) 00:11:34.900 fused_ordering(533) 00:11:34.900 fused_ordering(534) 00:11:34.900 fused_ordering(535) 00:11:34.900 fused_ordering(536) 00:11:34.900 fused_ordering(537) 00:11:34.900 fused_ordering(538) 00:11:34.900 fused_ordering(539) 00:11:34.900 fused_ordering(540) 00:11:34.900 fused_ordering(541) 00:11:34.900 fused_ordering(542) 00:11:34.900 fused_ordering(543) 00:11:34.900 fused_ordering(544) 00:11:34.900 fused_ordering(545) 00:11:34.900 fused_ordering(546) 00:11:34.900 fused_ordering(547) 00:11:34.900 fused_ordering(548) 00:11:34.900 fused_ordering(549) 00:11:34.900 fused_ordering(550) 00:11:34.900 fused_ordering(551) 00:11:34.900 fused_ordering(552) 00:11:34.900 fused_ordering(553) 00:11:34.900 fused_ordering(554) 00:11:34.900 fused_ordering(555) 00:11:34.900 fused_ordering(556) 00:11:34.900 fused_ordering(557) 00:11:34.900 fused_ordering(558) 00:11:34.900 fused_ordering(559) 00:11:34.900 fused_ordering(560) 00:11:34.900 fused_ordering(561) 00:11:34.900 fused_ordering(562) 00:11:34.900 fused_ordering(563) 00:11:34.900 fused_ordering(564) 00:11:34.900 fused_ordering(565) 00:11:34.900 fused_ordering(566) 00:11:34.900 fused_ordering(567) 00:11:34.900 fused_ordering(568) 00:11:34.900 fused_ordering(569) 00:11:34.900 fused_ordering(570) 00:11:34.900 fused_ordering(571) 00:11:34.900 fused_ordering(572) 00:11:34.900 fused_ordering(573) 00:11:34.900 fused_ordering(574) 00:11:34.900 fused_ordering(575) 00:11:34.900 fused_ordering(576) 00:11:34.900 fused_ordering(577) 00:11:34.900 fused_ordering(578) 00:11:34.900 fused_ordering(579) 00:11:34.900 fused_ordering(580) 00:11:34.900 fused_ordering(581) 00:11:34.900 fused_ordering(582) 00:11:34.900 fused_ordering(583) 00:11:34.900 fused_ordering(584) 00:11:34.900 fused_ordering(585) 00:11:34.900 fused_ordering(586) 00:11:34.900 fused_ordering(587) 00:11:34.900 fused_ordering(588) 00:11:34.900 fused_ordering(589) 00:11:34.900 fused_ordering(590) 00:11:34.900 fused_ordering(591) 00:11:34.900 fused_ordering(592) 00:11:34.900 fused_ordering(593) 00:11:34.900 fused_ordering(594) 00:11:34.900 fused_ordering(595) 00:11:34.900 fused_ordering(596) 00:11:34.900 fused_ordering(597) 00:11:34.900 fused_ordering(598) 00:11:34.900 fused_ordering(599) 00:11:34.900 fused_ordering(600) 00:11:34.900 fused_ordering(601) 00:11:34.900 fused_ordering(602) 00:11:34.900 fused_ordering(603) 00:11:34.900 fused_ordering(604) 00:11:34.900 fused_ordering(605) 00:11:34.900 fused_ordering(606) 00:11:34.900 fused_ordering(607) 00:11:34.900 fused_ordering(608) 00:11:34.900 fused_ordering(609) 00:11:34.900 fused_ordering(610) 00:11:34.900 fused_ordering(611) 00:11:34.900 fused_ordering(612) 00:11:34.900 fused_ordering(613) 00:11:34.900 fused_ordering(614) 00:11:34.900 fused_ordering(615) 00:11:35.159 fused_ordering(616) 00:11:35.159 fused_ordering(617) 00:11:35.159 fused_ordering(618) 00:11:35.159 fused_ordering(619) 00:11:35.159 fused_ordering(620) 00:11:35.159 fused_ordering(621) 00:11:35.159 fused_ordering(622) 00:11:35.159 fused_ordering(623) 00:11:35.159 fused_ordering(624) 00:11:35.159 fused_ordering(625) 00:11:35.159 fused_ordering(626) 00:11:35.159 fused_ordering(627) 00:11:35.159 fused_ordering(628) 00:11:35.159 fused_ordering(629) 00:11:35.159 fused_ordering(630) 00:11:35.159 fused_ordering(631) 00:11:35.159 fused_ordering(632) 00:11:35.159 fused_ordering(633) 00:11:35.159 fused_ordering(634) 
00:11:35.159 fused_ordering(635) 00:11:35.159 fused_ordering(636) 00:11:35.159 fused_ordering(637) 00:11:35.159 fused_ordering(638) 00:11:35.159 fused_ordering(639) 00:11:35.159 fused_ordering(640) 00:11:35.159 fused_ordering(641) 00:11:35.159 fused_ordering(642) 00:11:35.159 fused_ordering(643) 00:11:35.159 fused_ordering(644) 00:11:35.159 fused_ordering(645) 00:11:35.159 fused_ordering(646) 00:11:35.159 fused_ordering(647) 00:11:35.159 fused_ordering(648) 00:11:35.159 fused_ordering(649) 00:11:35.159 fused_ordering(650) 00:11:35.159 fused_ordering(651) 00:11:35.159 fused_ordering(652) 00:11:35.159 fused_ordering(653) 00:11:35.159 fused_ordering(654) 00:11:35.159 fused_ordering(655) 00:11:35.159 fused_ordering(656) 00:11:35.159 fused_ordering(657) 00:11:35.159 fused_ordering(658) 00:11:35.159 fused_ordering(659) 00:11:35.159 fused_ordering(660) 00:11:35.159 fused_ordering(661) 00:11:35.159 fused_ordering(662) 00:11:35.159 fused_ordering(663) 00:11:35.159 fused_ordering(664) 00:11:35.159 fused_ordering(665) 00:11:35.159 fused_ordering(666) 00:11:35.159 fused_ordering(667) 00:11:35.159 fused_ordering(668) 00:11:35.159 fused_ordering(669) 00:11:35.159 fused_ordering(670) 00:11:35.159 fused_ordering(671) 00:11:35.159 fused_ordering(672) 00:11:35.159 fused_ordering(673) 00:11:35.159 fused_ordering(674) 00:11:35.159 fused_ordering(675) 00:11:35.159 fused_ordering(676) 00:11:35.159 fused_ordering(677) 00:11:35.159 fused_ordering(678) 00:11:35.159 fused_ordering(679) 00:11:35.159 fused_ordering(680) 00:11:35.159 fused_ordering(681) 00:11:35.159 fused_ordering(682) 00:11:35.159 fused_ordering(683) 00:11:35.159 fused_ordering(684) 00:11:35.159 fused_ordering(685) 00:11:35.159 fused_ordering(686) 00:11:35.159 fused_ordering(687) 00:11:35.159 fused_ordering(688) 00:11:35.159 fused_ordering(689) 00:11:35.159 fused_ordering(690) 00:11:35.159 fused_ordering(691) 00:11:35.159 fused_ordering(692) 00:11:35.159 fused_ordering(693) 00:11:35.159 fused_ordering(694) 00:11:35.159 fused_ordering(695) 00:11:35.159 fused_ordering(696) 00:11:35.159 fused_ordering(697) 00:11:35.159 fused_ordering(698) 00:11:35.159 fused_ordering(699) 00:11:35.159 fused_ordering(700) 00:11:35.159 fused_ordering(701) 00:11:35.159 fused_ordering(702) 00:11:35.159 fused_ordering(703) 00:11:35.159 fused_ordering(704) 00:11:35.159 fused_ordering(705) 00:11:35.159 fused_ordering(706) 00:11:35.159 fused_ordering(707) 00:11:35.159 fused_ordering(708) 00:11:35.159 fused_ordering(709) 00:11:35.159 fused_ordering(710) 00:11:35.159 fused_ordering(711) 00:11:35.159 fused_ordering(712) 00:11:35.159 fused_ordering(713) 00:11:35.159 fused_ordering(714) 00:11:35.159 fused_ordering(715) 00:11:35.159 fused_ordering(716) 00:11:35.159 fused_ordering(717) 00:11:35.159 fused_ordering(718) 00:11:35.159 fused_ordering(719) 00:11:35.159 fused_ordering(720) 00:11:35.159 fused_ordering(721) 00:11:35.159 fused_ordering(722) 00:11:35.159 fused_ordering(723) 00:11:35.159 fused_ordering(724) 00:11:35.159 fused_ordering(725) 00:11:35.159 fused_ordering(726) 00:11:35.159 fused_ordering(727) 00:11:35.159 fused_ordering(728) 00:11:35.159 fused_ordering(729) 00:11:35.159 fused_ordering(730) 00:11:35.159 fused_ordering(731) 00:11:35.159 fused_ordering(732) 00:11:35.159 fused_ordering(733) 00:11:35.159 fused_ordering(734) 00:11:35.159 fused_ordering(735) 00:11:35.159 fused_ordering(736) 00:11:35.159 fused_ordering(737) 00:11:35.159 fused_ordering(738) 00:11:35.159 fused_ordering(739) 00:11:35.159 fused_ordering(740) 00:11:35.159 fused_ordering(741) 00:11:35.159 
fused_ordering(742) 00:11:35.159 fused_ordering(743) 00:11:35.159 fused_ordering(744) 00:11:35.159 fused_ordering(745) 00:11:35.159 fused_ordering(746) 00:11:35.159 fused_ordering(747) 00:11:35.159 fused_ordering(748) 00:11:35.159 fused_ordering(749) 00:11:35.159 fused_ordering(750) 00:11:35.159 fused_ordering(751) 00:11:35.159 fused_ordering(752) 00:11:35.159 fused_ordering(753) 00:11:35.159 fused_ordering(754) 00:11:35.159 fused_ordering(755) 00:11:35.159 fused_ordering(756) 00:11:35.159 fused_ordering(757) 00:11:35.159 fused_ordering(758) 00:11:35.159 fused_ordering(759) 00:11:35.159 fused_ordering(760) 00:11:35.159 fused_ordering(761) 00:11:35.159 fused_ordering(762) 00:11:35.159 fused_ordering(763) 00:11:35.159 fused_ordering(764) 00:11:35.159 fused_ordering(765) 00:11:35.159 fused_ordering(766) 00:11:35.159 fused_ordering(767) 00:11:35.159 fused_ordering(768) 00:11:35.159 fused_ordering(769) 00:11:35.159 fused_ordering(770) 00:11:35.159 fused_ordering(771) 00:11:35.159 fused_ordering(772) 00:11:35.159 fused_ordering(773) 00:11:35.159 fused_ordering(774) 00:11:35.159 fused_ordering(775) 00:11:35.159 fused_ordering(776) 00:11:35.159 fused_ordering(777) 00:11:35.159 fused_ordering(778) 00:11:35.159 fused_ordering(779) 00:11:35.159 fused_ordering(780) 00:11:35.159 fused_ordering(781) 00:11:35.159 fused_ordering(782) 00:11:35.159 fused_ordering(783) 00:11:35.159 fused_ordering(784) 00:11:35.159 fused_ordering(785) 00:11:35.159 fused_ordering(786) 00:11:35.159 fused_ordering(787) 00:11:35.159 fused_ordering(788) 00:11:35.159 fused_ordering(789) 00:11:35.159 fused_ordering(790) 00:11:35.159 fused_ordering(791) 00:11:35.159 fused_ordering(792) 00:11:35.159 fused_ordering(793) 00:11:35.159 fused_ordering(794) 00:11:35.159 fused_ordering(795) 00:11:35.159 fused_ordering(796) 00:11:35.159 fused_ordering(797) 00:11:35.159 fused_ordering(798) 00:11:35.159 fused_ordering(799) 00:11:35.159 fused_ordering(800) 00:11:35.159 fused_ordering(801) 00:11:35.159 fused_ordering(802) 00:11:35.159 fused_ordering(803) 00:11:35.159 fused_ordering(804) 00:11:35.159 fused_ordering(805) 00:11:35.159 fused_ordering(806) 00:11:35.159 fused_ordering(807) 00:11:35.159 fused_ordering(808) 00:11:35.159 fused_ordering(809) 00:11:35.159 fused_ordering(810) 00:11:35.159 fused_ordering(811) 00:11:35.159 fused_ordering(812) 00:11:35.159 fused_ordering(813) 00:11:35.159 fused_ordering(814) 00:11:35.159 fused_ordering(815) 00:11:35.159 fused_ordering(816) 00:11:35.159 fused_ordering(817) 00:11:35.159 fused_ordering(818) 00:11:35.159 fused_ordering(819) 00:11:35.159 fused_ordering(820) 00:11:35.418 fused_ordering(821) 00:11:35.418 fused_ordering(822) 00:11:35.418 fused_ordering(823) 00:11:35.418 fused_ordering(824) 00:11:35.418 fused_ordering(825) 00:11:35.418 fused_ordering(826) 00:11:35.418 fused_ordering(827) 00:11:35.418 fused_ordering(828) 00:11:35.418 fused_ordering(829) 00:11:35.418 fused_ordering(830) 00:11:35.418 fused_ordering(831) 00:11:35.418 fused_ordering(832) 00:11:35.418 fused_ordering(833) 00:11:35.418 fused_ordering(834) 00:11:35.418 fused_ordering(835) 00:11:35.418 fused_ordering(836) 00:11:35.418 fused_ordering(837) 00:11:35.418 fused_ordering(838) 00:11:35.418 fused_ordering(839) 00:11:35.418 fused_ordering(840) 00:11:35.418 fused_ordering(841) 00:11:35.418 fused_ordering(842) 00:11:35.418 fused_ordering(843) 00:11:35.418 fused_ordering(844) 00:11:35.418 fused_ordering(845) 00:11:35.418 fused_ordering(846) 00:11:35.418 fused_ordering(847) 00:11:35.418 fused_ordering(848) 00:11:35.418 fused_ordering(849) 
00:11:35.418 fused_ordering(850) 00:11:35.418 fused_ordering(851) 00:11:35.418 fused_ordering(852) 00:11:35.418 fused_ordering(853) 00:11:35.418 fused_ordering(854) 00:11:35.418 fused_ordering(855) 00:11:35.418 fused_ordering(856) 00:11:35.418 fused_ordering(857) 00:11:35.418 fused_ordering(858) 00:11:35.418 fused_ordering(859) 00:11:35.418 fused_ordering(860) 00:11:35.418 fused_ordering(861) 00:11:35.418 fused_ordering(862) 00:11:35.418 fused_ordering(863) 00:11:35.418 fused_ordering(864) 00:11:35.418 fused_ordering(865) 00:11:35.418 fused_ordering(866) 00:11:35.418 fused_ordering(867) 00:11:35.418 fused_ordering(868) 00:11:35.418 fused_ordering(869) 00:11:35.418 fused_ordering(870) 00:11:35.418 fused_ordering(871) 00:11:35.418 fused_ordering(872) 00:11:35.418 fused_ordering(873) 00:11:35.418 fused_ordering(874) 00:11:35.418 fused_ordering(875) 00:11:35.418 fused_ordering(876) 00:11:35.418 fused_ordering(877) 00:11:35.418 fused_ordering(878) 00:11:35.418 fused_ordering(879) 00:11:35.418 fused_ordering(880) 00:11:35.418 fused_ordering(881) 00:11:35.418 fused_ordering(882) 00:11:35.418 fused_ordering(883) 00:11:35.418 fused_ordering(884) 00:11:35.418 fused_ordering(885) 00:11:35.418 fused_ordering(886) 00:11:35.418 fused_ordering(887) 00:11:35.418 fused_ordering(888) 00:11:35.418 fused_ordering(889) 00:11:35.418 fused_ordering(890) 00:11:35.418 fused_ordering(891) 00:11:35.418 fused_ordering(892) 00:11:35.418 fused_ordering(893) 00:11:35.418 fused_ordering(894) 00:11:35.418 fused_ordering(895) 00:11:35.418 fused_ordering(896) 00:11:35.418 fused_ordering(897) 00:11:35.418 fused_ordering(898) 00:11:35.418 fused_ordering(899) 00:11:35.418 fused_ordering(900) 00:11:35.418 fused_ordering(901) 00:11:35.418 fused_ordering(902) 00:11:35.418 fused_ordering(903) 00:11:35.418 fused_ordering(904) 00:11:35.418 fused_ordering(905) 00:11:35.418 fused_ordering(906) 00:11:35.418 fused_ordering(907) 00:11:35.418 fused_ordering(908) 00:11:35.418 fused_ordering(909) 00:11:35.418 fused_ordering(910) 00:11:35.418 fused_ordering(911) 00:11:35.418 fused_ordering(912) 00:11:35.418 fused_ordering(913) 00:11:35.418 fused_ordering(914) 00:11:35.418 fused_ordering(915) 00:11:35.418 fused_ordering(916) 00:11:35.418 fused_ordering(917) 00:11:35.418 fused_ordering(918) 00:11:35.418 fused_ordering(919) 00:11:35.418 fused_ordering(920) 00:11:35.418 fused_ordering(921) 00:11:35.418 fused_ordering(922) 00:11:35.418 fused_ordering(923) 00:11:35.418 fused_ordering(924) 00:11:35.418 fused_ordering(925) 00:11:35.418 fused_ordering(926) 00:11:35.418 fused_ordering(927) 00:11:35.418 fused_ordering(928) 00:11:35.418 fused_ordering(929) 00:11:35.418 fused_ordering(930) 00:11:35.418 fused_ordering(931) 00:11:35.418 fused_ordering(932) 00:11:35.418 fused_ordering(933) 00:11:35.418 fused_ordering(934) 00:11:35.418 fused_ordering(935) 00:11:35.418 fused_ordering(936) 00:11:35.418 fused_ordering(937) 00:11:35.418 fused_ordering(938) 00:11:35.418 fused_ordering(939) 00:11:35.418 fused_ordering(940) 00:11:35.418 fused_ordering(941) 00:11:35.418 fused_ordering(942) 00:11:35.418 fused_ordering(943) 00:11:35.418 fused_ordering(944) 00:11:35.418 fused_ordering(945) 00:11:35.418 fused_ordering(946) 00:11:35.418 fused_ordering(947) 00:11:35.418 fused_ordering(948) 00:11:35.418 fused_ordering(949) 00:11:35.418 fused_ordering(950) 00:11:35.418 fused_ordering(951) 00:11:35.418 fused_ordering(952) 00:11:35.418 fused_ordering(953) 00:11:35.418 fused_ordering(954) 00:11:35.418 fused_ordering(955) 00:11:35.418 fused_ordering(956) 00:11:35.418 
fused_ordering(957) 00:11:35.418 fused_ordering(958) 00:11:35.418 fused_ordering(959) 00:11:35.418 fused_ordering(960) 00:11:35.418 fused_ordering(961) 00:11:35.418 fused_ordering(962) 00:11:35.418 fused_ordering(963) 00:11:35.418 fused_ordering(964) 00:11:35.418 fused_ordering(965) 00:11:35.418 fused_ordering(966) 00:11:35.418 fused_ordering(967) 00:11:35.418 fused_ordering(968) 00:11:35.418 fused_ordering(969) 00:11:35.418 fused_ordering(970) 00:11:35.418 fused_ordering(971) 00:11:35.418 fused_ordering(972) 00:11:35.418 fused_ordering(973) 00:11:35.418 fused_ordering(974) 00:11:35.418 fused_ordering(975) 00:11:35.418 fused_ordering(976) 00:11:35.418 fused_ordering(977) 00:11:35.418 fused_ordering(978) 00:11:35.418 fused_ordering(979) 00:11:35.418 fused_ordering(980) 00:11:35.418 fused_ordering(981) 00:11:35.418 fused_ordering(982) 00:11:35.418 fused_ordering(983) 00:11:35.418 fused_ordering(984) 00:11:35.418 fused_ordering(985) 00:11:35.418 fused_ordering(986) 00:11:35.418 fused_ordering(987) 00:11:35.418 fused_ordering(988) 00:11:35.418 fused_ordering(989) 00:11:35.418 fused_ordering(990) 00:11:35.418 fused_ordering(991) 00:11:35.418 fused_ordering(992) 00:11:35.418 fused_ordering(993) 00:11:35.418 fused_ordering(994) 00:11:35.418 fused_ordering(995) 00:11:35.418 fused_ordering(996) 00:11:35.418 fused_ordering(997) 00:11:35.418 fused_ordering(998) 00:11:35.418 fused_ordering(999) 00:11:35.418 fused_ordering(1000) 00:11:35.418 fused_ordering(1001) 00:11:35.418 fused_ordering(1002) 00:11:35.418 fused_ordering(1003) 00:11:35.418 fused_ordering(1004) 00:11:35.418 fused_ordering(1005) 00:11:35.418 fused_ordering(1006) 00:11:35.418 fused_ordering(1007) 00:11:35.418 fused_ordering(1008) 00:11:35.418 fused_ordering(1009) 00:11:35.418 fused_ordering(1010) 00:11:35.418 fused_ordering(1011) 00:11:35.418 fused_ordering(1012) 00:11:35.418 fused_ordering(1013) 00:11:35.418 fused_ordering(1014) 00:11:35.418 fused_ordering(1015) 00:11:35.418 fused_ordering(1016) 00:11:35.418 fused_ordering(1017) 00:11:35.418 fused_ordering(1018) 00:11:35.418 fused_ordering(1019) 00:11:35.418 fused_ordering(1020) 00:11:35.418 fused_ordering(1021) 00:11:35.418 fused_ordering(1022) 00:11:35.418 fused_ordering(1023) 00:11:35.418 16:22:44 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:35.418 16:22:44 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:35.418 16:22:44 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:35.418 16:22:44 -- nvmf/common.sh@117 -- # sync 00:11:35.418 16:22:44 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:35.418 16:22:44 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:35.418 16:22:44 -- nvmf/common.sh@120 -- # set +e 00:11:35.418 16:22:44 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:35.418 16:22:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:35.418 rmmod nvme_rdma 00:11:35.418 rmmod nvme_fabrics 00:11:35.418 16:22:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:35.418 16:22:44 -- nvmf/common.sh@124 -- # set -e 00:11:35.418 16:22:44 -- nvmf/common.sh@125 -- # return 0 00:11:35.418 16:22:44 -- nvmf/common.sh@478 -- # '[' -n 410354 ']' 00:11:35.418 16:22:44 -- nvmf/common.sh@479 -- # killprocess 410354 00:11:35.418 16:22:44 -- common/autotest_common.sh@936 -- # '[' -z 410354 ']' 00:11:35.418 16:22:44 -- common/autotest_common.sh@940 -- # kill -0 410354 00:11:35.418 16:22:44 -- common/autotest_common.sh@941 -- # uname 00:11:35.418 16:22:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:35.418 16:22:44 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 410354 00:11:35.418 16:22:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:35.418 16:22:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:35.418 16:22:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 410354' 00:11:35.418 killing process with pid 410354 00:11:35.418 16:22:44 -- common/autotest_common.sh@955 -- # kill 410354 00:11:35.418 16:22:44 -- common/autotest_common.sh@960 -- # wait 410354 00:11:35.418 [2024-04-26 16:22:44.325602] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:11:35.676 16:22:44 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:35.676 16:22:44 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:11:35.676 00:11:35.676 real 0m8.176s 00:11:35.676 user 0m4.557s 00:11:35.676 sys 0m4.960s 00:11:35.676 16:22:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:35.676 16:22:44 -- common/autotest_common.sh@10 -- # set +x 00:11:35.676 ************************************ 00:11:35.676 END TEST nvmf_fused_ordering 00:11:35.676 ************************************ 00:11:35.676 16:22:44 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:11:35.676 16:22:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:35.676 16:22:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:35.676 16:22:44 -- common/autotest_common.sh@10 -- # set +x 00:11:35.935 ************************************ 00:11:35.935 START TEST nvmf_delete_subsystem 00:11:35.935 ************************************ 00:11:35.935 16:22:44 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:11:35.935 * Looking for test storage... 
00:11:35.935 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:35.935 16:22:44 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:35.935 16:22:44 -- nvmf/common.sh@7 -- # uname -s 00:11:35.935 16:22:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:35.935 16:22:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:35.935 16:22:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:35.935 16:22:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:35.935 16:22:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:35.935 16:22:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:35.935 16:22:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:35.935 16:22:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:35.935 16:22:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:35.935 16:22:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:35.935 16:22:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:11:35.935 16:22:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:11:35.935 16:22:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:35.935 16:22:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:35.935 16:22:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:35.935 16:22:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:35.935 16:22:44 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:35.935 16:22:44 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:35.935 16:22:44 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:35.935 16:22:44 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:35.935 16:22:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.935 16:22:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.935 16:22:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.935 16:22:44 -- paths/export.sh@5 -- # export PATH 00:11:35.935 16:22:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.935 16:22:44 -- nvmf/common.sh@47 -- # : 0 00:11:35.935 16:22:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:35.935 16:22:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:35.935 16:22:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:35.935 16:22:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:35.935 16:22:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:35.935 16:22:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:35.935 16:22:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:35.935 16:22:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:35.935 16:22:44 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:11:35.935 16:22:44 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:11:35.935 16:22:44 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:35.935 16:22:44 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:35.935 16:22:44 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:35.935 16:22:44 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:35.935 16:22:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.935 16:22:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:35.935 16:22:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.935 16:22:44 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:35.935 16:22:44 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:35.935 16:22:44 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:35.935 16:22:44 -- common/autotest_common.sh@10 -- # set +x 00:11:41.201 16:22:50 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:41.201 16:22:50 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:41.201 16:22:50 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:41.201 16:22:50 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:41.201 16:22:50 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:41.201 16:22:50 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:41.201 16:22:50 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:41.201 16:22:50 -- nvmf/common.sh@295 -- # net_devs=() 00:11:41.201 16:22:50 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:41.201 16:22:50 -- nvmf/common.sh@296 -- # e810=() 00:11:41.201 16:22:50 -- nvmf/common.sh@296 -- # local -ga e810 00:11:41.201 16:22:50 -- nvmf/common.sh@297 -- # 
x722=() 00:11:41.201 16:22:50 -- nvmf/common.sh@297 -- # local -ga x722 00:11:41.201 16:22:50 -- nvmf/common.sh@298 -- # mlx=() 00:11:41.201 16:22:50 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:41.201 16:22:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:41.201 16:22:50 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:41.201 16:22:50 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:41.201 16:22:50 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:41.201 16:22:50 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:41.201 16:22:50 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:41.201 16:22:50 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:41.201 16:22:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:41.201 16:22:50 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:41.201 16:22:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:41.201 16:22:50 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:41.201 16:22:50 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:41.201 16:22:50 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:41.201 16:22:50 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:41.201 16:22:50 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:41.201 16:22:50 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:41.201 16:22:50 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:41.201 16:22:50 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:41.201 16:22:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:41.201 16:22:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:11:41.201 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:11:41.201 16:22:50 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:41.201 16:22:50 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:41.201 16:22:50 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:11:41.201 16:22:50 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:11:41.201 16:22:50 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:41.201 16:22:50 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:41.201 16:22:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:41.201 16:22:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:11:41.201 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:11:41.201 16:22:50 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:41.201 16:22:50 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:41.201 16:22:50 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:11:41.201 16:22:50 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:11:41.201 16:22:50 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:41.201 16:22:50 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:41.201 16:22:50 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:41.201 16:22:50 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:41.201 16:22:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:41.201 16:22:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.201 16:22:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:41.201 16:22:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.201 16:22:50 -- 
nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:11:41.201 Found net devices under 0000:18:00.0: mlx_0_0 00:11:41.201 16:22:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.201 16:22:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:41.201 16:22:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.201 16:22:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:41.201 16:22:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.201 16:22:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:11:41.201 Found net devices under 0000:18:00.1: mlx_0_1 00:11:41.201 16:22:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.201 16:22:50 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:41.201 16:22:50 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:41.201 16:22:50 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:41.201 16:22:50 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:11:41.201 16:22:50 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:11:41.201 16:22:50 -- nvmf/common.sh@409 -- # rdma_device_init 00:11:41.201 16:22:50 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:11:41.201 16:22:50 -- nvmf/common.sh@58 -- # uname 00:11:41.201 16:22:50 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:41.201 16:22:50 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:41.201 16:22:50 -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:41.201 16:22:50 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:41.201 16:22:50 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:41.201 16:22:50 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:41.201 16:22:50 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:41.201 16:22:50 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:41.201 16:22:50 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:11:41.201 16:22:50 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:41.201 16:22:50 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:41.201 16:22:50 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:41.201 16:22:50 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:41.201 16:22:50 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:41.201 16:22:50 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:41.201 16:22:50 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:41.201 16:22:50 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:41.201 16:22:50 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:41.201 16:22:50 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:41.201 16:22:50 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:41.201 16:22:50 -- nvmf/common.sh@105 -- # continue 2 00:11:41.201 16:22:50 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:41.201 16:22:50 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:41.201 16:22:50 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:41.201 16:22:50 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:41.201 16:22:50 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:41.201 16:22:50 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:41.201 16:22:50 -- nvmf/common.sh@105 -- # continue 2 00:11:41.201 16:22:50 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:41.201 16:22:50 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:41.201 16:22:50 -- 
nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:41.201 16:22:50 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:41.201 16:22:50 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:41.201 16:22:50 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:41.201 16:22:50 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:41.201 16:22:50 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:41.201 16:22:50 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:41.201 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:41.201 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:11:41.201 altname enp24s0f0np0 00:11:41.201 altname ens785f0np0 00:11:41.201 inet 192.168.100.8/24 scope global mlx_0_0 00:11:41.201 valid_lft forever preferred_lft forever 00:11:41.201 16:22:50 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:41.201 16:22:50 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:41.201 16:22:50 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:41.201 16:22:50 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:41.201 16:22:50 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:41.201 16:22:50 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:41.201 16:22:50 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:41.201 16:22:50 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:41.201 16:22:50 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:41.201 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:41.201 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:11:41.201 altname enp24s0f1np1 00:11:41.201 altname ens785f1np1 00:11:41.201 inet 192.168.100.9/24 scope global mlx_0_1 00:11:41.201 valid_lft forever preferred_lft forever 00:11:41.201 16:22:50 -- nvmf/common.sh@411 -- # return 0 00:11:41.201 16:22:50 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:41.201 16:22:50 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:41.201 16:22:50 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:11:41.201 16:22:50 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:11:41.201 16:22:50 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:41.201 16:22:50 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:41.201 16:22:50 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:41.201 16:22:50 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:41.201 16:22:50 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:41.201 16:22:50 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:41.202 16:22:50 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:41.202 16:22:50 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:41.202 16:22:50 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:41.202 16:22:50 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:41.202 16:22:50 -- nvmf/common.sh@105 -- # continue 2 00:11:41.202 16:22:50 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:41.202 16:22:50 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:41.202 16:22:50 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:41.202 16:22:50 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:41.202 16:22:50 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:41.202 16:22:50 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:41.202 16:22:50 -- nvmf/common.sh@105 -- # continue 2 00:11:41.460 16:22:50 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:41.460 
16:22:50 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:41.460 16:22:50 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:41.460 16:22:50 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:41.460 16:22:50 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:41.460 16:22:50 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:41.460 16:22:50 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:41.460 16:22:50 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:41.460 16:22:50 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:41.460 16:22:50 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:41.460 16:22:50 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:41.460 16:22:50 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:41.460 16:22:50 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:11:41.460 192.168.100.9' 00:11:41.460 16:22:50 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:11:41.461 192.168.100.9' 00:11:41.461 16:22:50 -- nvmf/common.sh@446 -- # head -n 1 00:11:41.461 16:22:50 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:41.461 16:22:50 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:11:41.461 192.168.100.9' 00:11:41.461 16:22:50 -- nvmf/common.sh@447 -- # tail -n +2 00:11:41.461 16:22:50 -- nvmf/common.sh@447 -- # head -n 1 00:11:41.461 16:22:50 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:41.461 16:22:50 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:11:41.461 16:22:50 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:41.461 16:22:50 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:11:41.461 16:22:50 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:11:41.461 16:22:50 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:11:41.461 16:22:50 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:41.461 16:22:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:41.461 16:22:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:41.461 16:22:50 -- common/autotest_common.sh@10 -- # set +x 00:11:41.461 16:22:50 -- nvmf/common.sh@470 -- # nvmfpid=413326 00:11:41.461 16:22:50 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:41.461 16:22:50 -- nvmf/common.sh@471 -- # waitforlisten 413326 00:11:41.461 16:22:50 -- common/autotest_common.sh@817 -- # '[' -z 413326 ']' 00:11:41.461 16:22:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.461 16:22:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:41.461 16:22:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.461 16:22:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:41.461 16:22:50 -- common/autotest_common.sh@10 -- # set +x 00:11:41.461 [2024-04-26 16:22:50.345392] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
00:11:41.461 [2024-04-26 16:22:50.345446] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:41.461 EAL: No free 2048 kB hugepages reported on node 1 00:11:41.461 [2024-04-26 16:22:50.417665] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:41.720 [2024-04-26 16:22:50.498823] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:41.720 [2024-04-26 16:22:50.498860] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:41.720 [2024-04-26 16:22:50.498869] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:41.720 [2024-04-26 16:22:50.498893] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:41.720 [2024-04-26 16:22:50.498900] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:41.720 [2024-04-26 16:22:50.498945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:41.720 [2024-04-26 16:22:50.498948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.288 16:22:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:42.288 16:22:51 -- common/autotest_common.sh@850 -- # return 0 00:11:42.288 16:22:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:42.288 16:22:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:42.288 16:22:51 -- common/autotest_common.sh@10 -- # set +x 00:11:42.288 16:22:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:42.288 16:22:51 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:42.288 16:22:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:42.288 16:22:51 -- common/autotest_common.sh@10 -- # set +x 00:11:42.288 [2024-04-26 16:22:51.210437] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x16fac90/0x16ff180) succeed. 00:11:42.288 [2024-04-26 16:22:51.219420] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x16fc190/0x1740810) succeed. 
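The nvmfappstart/waitforlisten step traced above amounts to launching nvmf_tgt in the background and polling its RPC socket until it answers, before any configuration RPCs are sent. A minimal sketch of that pattern, assuming the default /var/tmp/spdk.sock socket, a 0.5 s poll, and rpc_get_methods as the probe (the real helpers, nvmfappstart and waitforlisten, come from the repo's common test scripts):
  SPDK_BIN=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin
  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk.sock
  # start the target with the same flags as in the trace above
  "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!
  # poll the RPC socket until the target answers; give up after ~50 s
  for ((i = 0; i < 100; i++)); do
      "$RPC" -s "$SOCK" rpc_get_methods &> /dev/null && break
      sleep 0.5
  done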
00:11:42.288 16:22:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:42.288 16:22:51 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:42.288 16:22:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:42.288 16:22:51 -- common/autotest_common.sh@10 -- # set +x 00:11:42.288 16:22:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:42.288 16:22:51 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:42.288 16:22:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:42.288 16:22:51 -- common/autotest_common.sh@10 -- # set +x 00:11:42.288 [2024-04-26 16:22:51.310254] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:42.547 16:22:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:42.547 16:22:51 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:42.547 16:22:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:42.547 16:22:51 -- common/autotest_common.sh@10 -- # set +x 00:11:42.547 NULL1 00:11:42.547 16:22:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:42.547 16:22:51 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:42.547 16:22:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:42.547 16:22:51 -- common/autotest_common.sh@10 -- # set +x 00:11:42.547 Delay0 00:11:42.547 16:22:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:42.547 16:22:51 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:42.547 16:22:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:42.547 16:22:51 -- common/autotest_common.sh@10 -- # set +x 00:11:42.547 16:22:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:42.547 16:22:51 -- target/delete_subsystem.sh@28 -- # perf_pid=413523 00:11:42.547 16:22:51 -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:42.547 16:22:51 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:42.547 EAL: No free 2048 kB hugepages reported on node 1 00:11:42.547 [2024-04-26 16:22:51.412934] subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
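Collapsed into one place, the target-side RPC sequence traced above (transport, subsystem, RDMA listener, a null bdev wrapped in a delay bdev, then the namespace) looks roughly as follows; the command lines are lifted from the rpc_cmd traces, with $RPC standing in for the tree's scripts/rpc.py talking to an already-running nvmf_tgt:
  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  "$RPC" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  # 1000 MiB null bdev with 512-byte blocks, wrapped in a delay bdev that adds ~1 s of
  # artificial latency so the subsystem delete races with queued I/O
  "$RPC" bdev_null_create NULL1 1000 512
  "$RPC" bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0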
00:11:44.449 16:22:53 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:44.449 16:22:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:44.449 16:22:53 -- common/autotest_common.sh@10 -- # set +x 00:11:45.884 NVMe io qpair process completion error 00:11:45.884 NVMe io qpair process completion error 00:11:45.884 NVMe io qpair process completion error 00:11:45.884 NVMe io qpair process completion error 00:11:45.884 NVMe io qpair process completion error 00:11:45.884 NVMe io qpair process completion error 00:11:45.884 16:22:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:45.884 16:22:54 -- target/delete_subsystem.sh@34 -- # delay=0 00:11:45.884 16:22:54 -- target/delete_subsystem.sh@35 -- # kill -0 413523 00:11:45.884 16:22:54 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:11:46.174 16:22:54 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:46.174 16:22:54 -- target/delete_subsystem.sh@35 -- # kill -0 413523 00:11:46.174 16:22:54 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:11:46.445 Read completed with error (sct=0, sc=8) 00:11:46.445 starting I/O failed: -6 00:11:46.445 Write completed with error (sct=0, sc=8) 00:11:46.445 starting I/O failed: -6 00:11:46.445 Write completed with error (sct=0, sc=8) 00:11:46.445 starting I/O failed: -6 00:11:46.445 Read completed with error (sct=0, sc=8) 00:11:46.445 starting I/O failed: -6 00:11:46.445 Write completed with error (sct=0, sc=8) 00:11:46.445 starting I/O failed: -6 00:11:46.445 Write completed with error (sct=0, sc=8) 00:11:46.446 starting I/O failed: -6 00:11:46.446 Read completed with error (sct=0, sc=8) 00:11:46.446 starting I/O failed: -6 00:11:46.446 Read completed with error (sct=0, sc=8) 00:11:46.446 starting I/O failed: -6 00:11:46.446 Read completed with error (sct=0, sc=8) 00:11:46.446 starting I/O failed: -6 00:11:46.446 Write completed with error (sct=0, sc=8) 00:11:46.446 starting I/O failed: -6 00:11:46.446 Write completed with error (sct=0, sc=8) 00:11:46.446 starting I/O failed: -6 00:11:46.446 Read completed with error (sct=0, sc=8) 00:11:46.446 starting I/O failed: -6 00:11:46.446 Write completed with error (sct=0, sc=8) 00:11:46.446 starting I/O failed: -6 00:11:46.446 Write completed with error (sct=0, sc=8) 00:11:46.446 starting I/O failed: -6 00:11:46.446 Write completed with error (sct=0, sc=8) 00:11:46.446 starting I/O failed: -6 00:11:46.446 Write completed with error (sct=0, sc=8) 00:11:46.446 starting I/O failed: -6 00:11:46.446 Read completed with error (sct=0, sc=8) 00:11:46.446 starting I/O failed: -6 00:11:46.446 Read completed with error (sct=0, sc=8) 00:11:46.446 starting I/O failed: -6 00:11:46.446 Write completed with error (sct=0, sc=8) 00:11:46.446 starting I/O failed: -6 00:11:46.446 Write completed with error (sct=0, sc=8) 00:11:46.446 starting I/O failed: -6 00:11:46.446 Write completed with error (sct=0, sc=8) 00:11:46.446 starting I/O failed: -6 00:11:46.446 Read completed with error (sct=0, sc=8) 00:11:46.446 starting I/O failed: -6 00:11:46.446 Read completed with error (sct=0, sc=8) 00:11:46.446 starting I/O failed: -6 00:11:46.446 Read completed with error (sct=0, sc=8) 00:11:46.446 starting I/O failed: -6 00:11:46.446 Read completed with error (sct=0, sc=8) 00:11:46.446 starting I/O failed: -6 00:11:46.446 Read completed with error (sct=0, sc=8) 00:11:46.446 starting I/O failed: -6 00:11:46.446 Read completed with error (sct=0, sc=8) 00:11:46.446 starting I/O failed: -6 00:11:46.446 Read completed 
with error (sct=0, sc=8) 00:11:46.446 starting I/O failed: -6 [a long run of further "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" completion entries is condensed here; the last few completions are left below, where the delay loop then resumes]
00:11:46.752 Read completed with error (sct=0, sc=8) 00:11:46.752 Read completed with error (sct=0, sc=8) 00:11:46.752 Read completed with error (sct=0, sc=8) 00:11:46.752 Read completed with error (sct=0, sc=8) 00:11:46.752 Write completed with error (sct=0, sc=8) 00:11:46.752 Read completed with error (sct=0, sc=8) 00:11:46.752 Read completed with error (sct=0, sc=8) 00:11:46.752 Write completed with error (sct=0, sc=8) 00:11:46.752 Read completed with error (sct=0, sc=8) 00:11:46.752 Read completed with error (sct=0, sc=8) 00:11:46.752 16:22:55 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:46.752 16:22:55 -- target/delete_subsystem.sh@35 -- # kill -0 413523 00:11:46.752 16:22:55 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:11:46.752 [2024-04-26 16:22:55.507506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:11:46.752 [2024-04-26 16:22:55.507558] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:11:46.752 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:11:46.752 Initializing NVMe Controllers 00:11:46.752 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:11:46.752 Controller IO queue size 128, less than required. 00:11:46.752 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:46.752 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:46.752 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:46.752 Initialization complete. Launching workers. 00:11:46.752 ======================================================== 00:11:46.752 Latency(us) 00:11:46.752 Device Information : IOPS MiB/s Average min max 00:11:46.752 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.38 0.04 1595347.67 1000124.57 2980899.26 00:11:46.752 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.38 0.04 1596529.03 1001125.90 2981896.71 00:11:46.752 ======================================================== 00:11:46.752 Total : 160.75 0.08 1595938.35 1000124.57 2981896.71 00:11:46.752 00:11:47.053 16:22:56 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:47.053 16:22:56 -- target/delete_subsystem.sh@35 -- # kill -0 413523 00:11:47.053 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (413523) - No such process 00:11:47.053 16:22:56 -- target/delete_subsystem.sh@45 -- # NOT wait 413523 00:11:47.053 16:22:56 -- common/autotest_common.sh@638 -- # local es=0 00:11:47.053 16:22:56 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 413523 00:11:47.053 16:22:56 -- common/autotest_common.sh@626 -- # local arg=wait 00:11:47.053 16:22:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:47.053 16:22:56 -- common/autotest_common.sh@630 -- # type -t wait 00:11:47.053 16:22:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:47.053 16:22:56 -- common/autotest_common.sh@641 -- # wait 413523 00:11:47.053 16:22:56 -- common/autotest_common.sh@641 -- # es=1 00:11:47.053 16:22:56 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:47.053 16:22:56 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:47.053 16:22:56 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 
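The @34-@38 trace above is a bounded poll on the perf process: once the subsystem is deleted the queued I/O fails, spdk_nvme_perf exits, and kill -0 stops succeeding. Restated as a standalone sketch (the 30-iteration bound and 0.5 s sleep mirror the script; the error handling is simplified):
  # bounded wait for a background process to exit
  wait_for_exit() {
      local pid=$1 delay=0
      while kill -0 "$pid" 2> /dev/null; do
          ((delay++ > 30)) && return 1   # give up after ~15 s of 0.5 s polls
          sleep 0.5
      done
      return 0
  }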
00:11:47.053 16:22:56 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:47.053 16:22:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:47.053 16:22:56 -- common/autotest_common.sh@10 -- # set +x 00:11:47.053 16:22:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:47.053 16:22:56 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:47.053 16:22:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:47.053 16:22:56 -- common/autotest_common.sh@10 -- # set +x 00:11:47.053 [2024-04-26 16:22:56.027158] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:47.053 16:22:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:47.053 16:22:56 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:47.053 16:22:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:47.053 16:22:56 -- common/autotest_common.sh@10 -- # set +x 00:11:47.053 16:22:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:47.053 16:22:56 -- target/delete_subsystem.sh@54 -- # perf_pid=414178 00:11:47.053 16:22:56 -- target/delete_subsystem.sh@56 -- # delay=0 00:11:47.053 16:22:56 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:47.053 16:22:56 -- target/delete_subsystem.sh@57 -- # kill -0 414178 00:11:47.053 16:22:56 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:47.333 EAL: No free 2048 kB hugepages reported on node 1 00:11:47.333 [2024-04-26 16:22:56.107059] subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
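Here the test re-creates the subsystem and starts a second initiator-side load (pid 414178) before deleting the subsystem again. The perf command line below is the one from the @52 trace; capturing the pid with $! is an assumption about how @54's perf_pid was obtained:
  PERF=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf
  # 3 s random 70/30 read/write workload, queue depth 128, 512-byte I/O, over NVMe/RDMA
  "$PERF" -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
          -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!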
00:11:47.620 16:22:56 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:47.620 16:22:56 -- target/delete_subsystem.sh@57 -- # kill -0 414178 00:11:47.620 16:22:56 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:48.272 16:22:57 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:48.272 16:22:57 -- target/delete_subsystem.sh@57 -- # kill -0 414178 00:11:48.272 16:22:57 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:48.543 16:22:57 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:48.543 16:22:57 -- target/delete_subsystem.sh@57 -- # kill -0 414178 00:11:48.543 16:22:57 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:49.114 16:22:58 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:49.114 16:22:58 -- target/delete_subsystem.sh@57 -- # kill -0 414178 00:11:49.114 16:22:58 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:49.681 16:22:58 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:49.681 16:22:58 -- target/delete_subsystem.sh@57 -- # kill -0 414178 00:11:49.681 16:22:58 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:50.247 16:22:59 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:50.247 16:22:59 -- target/delete_subsystem.sh@57 -- # kill -0 414178 00:11:50.247 16:22:59 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:50.816 16:22:59 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:50.816 16:22:59 -- target/delete_subsystem.sh@57 -- # kill -0 414178 00:11:50.816 16:22:59 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:51.074 16:23:00 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:51.074 16:23:00 -- target/delete_subsystem.sh@57 -- # kill -0 414178 00:11:51.074 16:23:00 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:51.641 16:23:00 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:51.641 16:23:00 -- target/delete_subsystem.sh@57 -- # kill -0 414178 00:11:51.641 16:23:00 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:52.207 16:23:01 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:52.207 16:23:01 -- target/delete_subsystem.sh@57 -- # kill -0 414178 00:11:52.207 16:23:01 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:52.774 16:23:01 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:52.774 16:23:01 -- target/delete_subsystem.sh@57 -- # kill -0 414178 00:11:52.774 16:23:01 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:53.341 16:23:02 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:53.341 16:23:02 -- target/delete_subsystem.sh@57 -- # kill -0 414178 00:11:53.341 16:23:02 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:53.600 16:23:02 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:53.600 16:23:02 -- target/delete_subsystem.sh@57 -- # kill -0 414178 00:11:53.600 16:23:02 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:54.179 16:23:03 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:54.179 16:23:03 -- target/delete_subsystem.sh@57 -- # kill -0 414178 00:11:54.179 16:23:03 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:54.436 Initializing NVMe Controllers 00:11:54.436 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:11:54.436 Controller IO queue size 128, less than required. 00:11:54.436 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:11:54.436 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:54.436 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:54.436 Initialization complete. Launching workers. 00:11:54.436 ======================================================== 00:11:54.436 Latency(us) 00:11:54.436 Device Information : IOPS MiB/s Average min max 00:11:54.436 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001524.20 1000047.32 1004829.76 00:11:54.436 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002263.23 1000090.70 1005967.16 00:11:54.436 ======================================================== 00:11:54.436 Total : 256.00 0.12 1001893.72 1000047.32 1005967.16 00:11:54.436 00:11:54.694 16:23:03 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:54.694 16:23:03 -- target/delete_subsystem.sh@57 -- # kill -0 414178 00:11:54.694 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (414178) - No such process 00:11:54.694 16:23:03 -- target/delete_subsystem.sh@67 -- # wait 414178 00:11:54.694 16:23:03 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:54.694 16:23:03 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:11:54.694 16:23:03 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:54.694 16:23:03 -- nvmf/common.sh@117 -- # sync 00:11:54.694 16:23:03 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:54.694 16:23:03 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:54.694 16:23:03 -- nvmf/common.sh@120 -- # set +e 00:11:54.694 16:23:03 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:54.694 16:23:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:54.694 rmmod nvme_rdma 00:11:54.694 rmmod nvme_fabrics 00:11:54.694 16:23:03 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:54.694 16:23:03 -- nvmf/common.sh@124 -- # set -e 00:11:54.694 16:23:03 -- nvmf/common.sh@125 -- # return 0 00:11:54.694 16:23:03 -- nvmf/common.sh@478 -- # '[' -n 413326 ']' 00:11:54.694 16:23:03 -- nvmf/common.sh@479 -- # killprocess 413326 00:11:54.694 16:23:03 -- common/autotest_common.sh@936 -- # '[' -z 413326 ']' 00:11:54.694 16:23:03 -- common/autotest_common.sh@940 -- # kill -0 413326 00:11:54.694 16:23:03 -- common/autotest_common.sh@941 -- # uname 00:11:54.694 16:23:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:54.694 16:23:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 413326 00:11:54.952 16:23:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:54.952 16:23:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:54.952 16:23:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 413326' 00:11:54.952 killing process with pid 413326 00:11:54.952 16:23:03 -- common/autotest_common.sh@955 -- # kill 413326 00:11:54.952 16:23:03 -- common/autotest_common.sh@960 -- # wait 413326 00:11:54.952 [2024-04-26 16:23:03.784292] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:11:55.210 16:23:04 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:55.210 16:23:04 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:11:55.210 00:11:55.210 real 0m19.281s 00:11:55.210 user 0m49.657s 00:11:55.210 sys 0m5.386s 00:11:55.210 16:23:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:55.210 16:23:04 -- common/autotest_common.sh@10 
-- # set +x 00:11:55.210 ************************************ 00:11:55.210 END TEST nvmf_delete_subsystem 00:11:55.210 ************************************ 00:11:55.210 16:23:04 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:11:55.210 16:23:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:55.210 16:23:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:55.210 16:23:04 -- common/autotest_common.sh@10 -- # set +x 00:11:55.210 ************************************ 00:11:55.210 START TEST nvmf_ns_masking 00:11:55.210 ************************************ 00:11:55.210 16:23:04 -- common/autotest_common.sh@1111 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:11:55.470 * Looking for test storage... 00:11:55.470 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:55.470 16:23:04 -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:55.470 16:23:04 -- nvmf/common.sh@7 -- # uname -s 00:11:55.470 16:23:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:55.470 16:23:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:55.470 16:23:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:55.470 16:23:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:55.470 16:23:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:55.470 16:23:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:55.470 16:23:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:55.470 16:23:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:55.470 16:23:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:55.470 16:23:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:55.470 16:23:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:11:55.470 16:23:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:11:55.470 16:23:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:55.470 16:23:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:55.470 16:23:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:55.470 16:23:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:55.470 16:23:04 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:55.470 16:23:04 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:55.471 16:23:04 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:55.471 16:23:04 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:55.471 16:23:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.471 16:23:04 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.471 16:23:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.471 16:23:04 -- paths/export.sh@5 -- # export PATH 00:11:55.471 16:23:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.471 16:23:04 -- nvmf/common.sh@47 -- # : 0 00:11:55.471 16:23:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:55.471 16:23:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:55.471 16:23:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:55.471 16:23:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:55.471 16:23:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:55.471 16:23:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:55.471 16:23:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:55.471 16:23:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:55.471 16:23:04 -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:55.471 16:23:04 -- target/ns_masking.sh@11 -- # loops=5 00:11:55.471 16:23:04 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:55.471 16:23:04 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:11:55.471 16:23:04 -- target/ns_masking.sh@15 -- # uuidgen 00:11:55.471 16:23:04 -- target/ns_masking.sh@15 -- # HOSTID=37f29096-2cf6-45f7-b42f-d616693d8a7c 00:11:55.471 16:23:04 -- target/ns_masking.sh@44 -- # nvmftestinit 00:11:55.471 16:23:04 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:11:55.471 16:23:04 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:55.471 16:23:04 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:55.471 16:23:04 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:55.471 16:23:04 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:55.471 16:23:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.471 16:23:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:55.472 16:23:04 -- common/autotest_common.sh@22 -- 
# _remove_spdk_ns 00:11:55.472 16:23:04 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:55.472 16:23:04 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:55.472 16:23:04 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:55.472 16:23:04 -- common/autotest_common.sh@10 -- # set +x 00:12:02.049 16:23:10 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:02.049 16:23:10 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:02.049 16:23:10 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:02.049 16:23:10 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:02.049 16:23:10 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:02.049 16:23:10 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:02.049 16:23:10 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:02.049 16:23:10 -- nvmf/common.sh@295 -- # net_devs=() 00:12:02.049 16:23:10 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:02.049 16:23:10 -- nvmf/common.sh@296 -- # e810=() 00:12:02.049 16:23:10 -- nvmf/common.sh@296 -- # local -ga e810 00:12:02.049 16:23:10 -- nvmf/common.sh@297 -- # x722=() 00:12:02.049 16:23:10 -- nvmf/common.sh@297 -- # local -ga x722 00:12:02.049 16:23:10 -- nvmf/common.sh@298 -- # mlx=() 00:12:02.049 16:23:10 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:02.049 16:23:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:02.049 16:23:10 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:02.049 16:23:10 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:02.049 16:23:10 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:02.049 16:23:10 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:02.049 16:23:10 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:02.049 16:23:10 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:02.049 16:23:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:02.049 16:23:10 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:02.049 16:23:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:02.049 16:23:10 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:02.049 16:23:10 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:02.049 16:23:10 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:02.049 16:23:10 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:02.049 16:23:10 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:02.049 16:23:10 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:02.049 16:23:10 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:02.049 16:23:10 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:02.049 16:23:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:02.049 16:23:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:12:02.049 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:12:02.049 16:23:10 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:02.049 16:23:10 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:02.049 16:23:10 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:12:02.049 16:23:10 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:12:02.049 16:23:10 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:02.049 16:23:10 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:02.049 16:23:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:02.049 
16:23:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:12:02.049 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:12:02.049 16:23:10 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:02.049 16:23:10 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:02.049 16:23:10 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:12:02.049 16:23:10 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:12:02.049 16:23:10 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:02.049 16:23:10 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:02.049 16:23:10 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:02.049 16:23:10 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:02.049 16:23:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:02.049 16:23:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:02.049 16:23:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:02.049 16:23:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:02.049 16:23:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:12:02.049 Found net devices under 0000:18:00.0: mlx_0_0 00:12:02.049 16:23:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:02.049 16:23:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:02.049 16:23:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:02.049 16:23:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:02.049 16:23:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:02.049 16:23:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:12:02.049 Found net devices under 0000:18:00.1: mlx_0_1 00:12:02.049 16:23:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:02.049 16:23:10 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:02.049 16:23:10 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:02.049 16:23:10 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:02.049 16:23:10 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:12:02.049 16:23:10 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:12:02.049 16:23:10 -- nvmf/common.sh@409 -- # rdma_device_init 00:12:02.049 16:23:10 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:12:02.049 16:23:10 -- nvmf/common.sh@58 -- # uname 00:12:02.049 16:23:10 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:02.049 16:23:10 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:02.049 16:23:10 -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:02.049 16:23:10 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:02.049 16:23:10 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:02.049 16:23:10 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:02.049 16:23:10 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:02.049 16:23:10 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:02.049 16:23:10 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:12:02.049 16:23:10 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:02.049 16:23:10 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:02.049 16:23:10 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:02.049 16:23:10 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:02.049 16:23:10 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:02.049 16:23:10 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:02.049 16:23:10 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:02.049 16:23:10 -- 
nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:02.049 16:23:10 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:02.049 16:23:10 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:02.049 16:23:10 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:02.049 16:23:10 -- nvmf/common.sh@105 -- # continue 2 00:12:02.050 16:23:10 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:02.050 16:23:10 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:02.050 16:23:10 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:02.050 16:23:10 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:02.050 16:23:10 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:02.050 16:23:10 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:02.050 16:23:10 -- nvmf/common.sh@105 -- # continue 2 00:12:02.050 16:23:10 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:02.050 16:23:10 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:02.050 16:23:10 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:02.050 16:23:10 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:02.050 16:23:10 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:02.050 16:23:10 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:02.050 16:23:10 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:02.050 16:23:10 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:02.050 16:23:10 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:02.050 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:02.050 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:12:02.050 altname enp24s0f0np0 00:12:02.050 altname ens785f0np0 00:12:02.050 inet 192.168.100.8/24 scope global mlx_0_0 00:12:02.050 valid_lft forever preferred_lft forever 00:12:02.050 16:23:10 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:02.050 16:23:10 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:02.050 16:23:10 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:02.050 16:23:10 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:02.050 16:23:10 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:02.050 16:23:10 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:02.050 16:23:10 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:02.050 16:23:10 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:02.050 16:23:10 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:02.050 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:02.050 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:12:02.050 altname enp24s0f1np1 00:12:02.050 altname ens785f1np1 00:12:02.050 inet 192.168.100.9/24 scope global mlx_0_1 00:12:02.050 valid_lft forever preferred_lft forever 00:12:02.050 16:23:10 -- nvmf/common.sh@411 -- # return 0 00:12:02.050 16:23:10 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:02.050 16:23:10 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:02.050 16:23:10 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:12:02.050 16:23:10 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:12:02.050 16:23:10 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:02.050 16:23:10 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:02.050 16:23:10 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:02.050 16:23:10 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:02.050 16:23:10 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 
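The @112-@113 traces above resolve each RDMA netdev to its IPv4 address by taking field 4 of "ip -o -4 addr show" and stripping the prefix length. As a small standalone helper (the interface names and addresses are simply the ones present on this node):
  # print the IPv4 address of a network interface, without the /prefix
  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # 192.168.100.8 on this node
  get_ip_address mlx_0_1   # 192.168.100.9 on this node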
00:12:02.050 16:23:10 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:02.050 16:23:10 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:02.050 16:23:10 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:02.050 16:23:10 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:02.050 16:23:10 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:02.050 16:23:10 -- nvmf/common.sh@105 -- # continue 2 00:12:02.050 16:23:10 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:02.050 16:23:10 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:02.050 16:23:10 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:02.050 16:23:10 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:02.050 16:23:10 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:02.050 16:23:10 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:02.050 16:23:10 -- nvmf/common.sh@105 -- # continue 2 00:12:02.050 16:23:10 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:02.050 16:23:10 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:02.050 16:23:10 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:02.050 16:23:10 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:02.050 16:23:10 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:02.050 16:23:10 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:02.050 16:23:10 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:02.050 16:23:10 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:02.050 16:23:10 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:02.050 16:23:10 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:02.050 16:23:10 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:02.050 16:23:10 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:02.050 16:23:10 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:12:02.050 192.168.100.9' 00:12:02.050 16:23:10 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:12:02.050 192.168.100.9' 00:12:02.050 16:23:10 -- nvmf/common.sh@446 -- # head -n 1 00:12:02.050 16:23:10 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:02.050 16:23:10 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:12:02.050 192.168.100.9' 00:12:02.050 16:23:10 -- nvmf/common.sh@447 -- # tail -n +2 00:12:02.050 16:23:10 -- nvmf/common.sh@447 -- # head -n 1 00:12:02.050 16:23:10 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:02.050 16:23:10 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:12:02.050 16:23:10 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:02.050 16:23:10 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:12:02.050 16:23:10 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:12:02.050 16:23:10 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:12:02.050 16:23:10 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:12:02.050 16:23:10 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:02.050 16:23:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:02.050 16:23:10 -- common/autotest_common.sh@10 -- # set +x 00:12:02.050 16:23:10 -- nvmf/common.sh@470 -- # nvmfpid=418135 00:12:02.050 16:23:10 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:02.050 16:23:10 -- nvmf/common.sh@471 -- # waitforlisten 418135 00:12:02.050 16:23:10 -- common/autotest_common.sh@817 -- # '[' -z 418135 ']' 00:12:02.050 16:23:10 -- common/autotest_common.sh@821 
-- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.050 16:23:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:02.050 16:23:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:02.050 16:23:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:02.050 16:23:10 -- common/autotest_common.sh@10 -- # set +x 00:12:02.050 [2024-04-26 16:23:10.343958] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:12:02.050 [2024-04-26 16:23:10.344013] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:02.050 EAL: No free 2048 kB hugepages reported on node 1 00:12:02.050 [2024-04-26 16:23:10.420413] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:02.050 [2024-04-26 16:23:10.504014] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:02.050 [2024-04-26 16:23:10.504055] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:02.050 [2024-04-26 16:23:10.504065] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:02.050 [2024-04-26 16:23:10.504074] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:02.050 [2024-04-26 16:23:10.504082] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:02.050 [2024-04-26 16:23:10.504142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:02.050 [2024-04-26 16:23:10.504155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:02.050 [2024-04-26 16:23:10.504219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:02.050 [2024-04-26 16:23:10.504221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.309 16:23:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:02.309 16:23:11 -- common/autotest_common.sh@850 -- # return 0 00:12:02.309 16:23:11 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:02.309 16:23:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:02.309 16:23:11 -- common/autotest_common.sh@10 -- # set +x 00:12:02.309 16:23:11 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:02.309 16:23:11 -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:02.568 [2024-04-26 16:23:11.392109] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xbdc310/0xbe0800) succeed. 00:12:02.568 [2024-04-26 16:23:11.402366] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xbdd950/0xc21e90) succeed. 
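The @445-@447 traces a little earlier split the newline-separated RDMA_IP_LIST into first and second target IPs with head and tail; a minimal sketch of that derivation, using the two addresses discovered on this node:
  RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                  # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)    # 192.168.100.9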
00:12:02.568 16:23:11 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:12:02.568 16:23:11 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:12:02.568 16:23:11 -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:02.827 Malloc1 00:12:02.827 16:23:11 -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:03.086 Malloc2 00:12:03.086 16:23:11 -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:03.086 16:23:12 -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:03.345 16:23:12 -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:03.604 [2024-04-26 16:23:12.466912] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:03.604 16:23:12 -- target/ns_masking.sh@61 -- # connect 00:12:03.604 16:23:12 -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 37f29096-2cf6-45f7-b42f-d616693d8a7c -a 192.168.100.8 -s 4420 -i 4 00:12:04.171 16:23:12 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:12:04.171 16:23:12 -- common/autotest_common.sh@1184 -- # local i=0 00:12:04.171 16:23:12 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:04.172 16:23:12 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:12:04.172 16:23:12 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:06.071 16:23:14 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:06.071 16:23:14 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:06.071 16:23:14 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:06.071 16:23:14 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:12:06.071 16:23:14 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:06.071 16:23:14 -- common/autotest_common.sh@1194 -- # return 0 00:12:06.071 16:23:14 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:12:06.071 16:23:14 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:06.071 16:23:15 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:12:06.071 16:23:15 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:12:06.071 16:23:15 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:12:06.071 16:23:15 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:06.071 16:23:15 -- target/ns_masking.sh@39 -- # grep 0x1 00:12:06.071 [ 0]:0x1 00:12:06.071 16:23:15 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:06.071 16:23:15 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:06.071 16:23:15 -- target/ns_masking.sh@40 -- # nguid=fb841e36ac6e49af9e5f73e323328c02 00:12:06.071 16:23:15 -- target/ns_masking.sh@41 -- # [[ fb841e36ac6e49af9e5f73e323328c02 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:06.071 16:23:15 -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Malloc2 -n 2 00:12:06.330 16:23:15 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:12:06.330 16:23:15 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:06.330 16:23:15 -- target/ns_masking.sh@39 -- # grep 0x1 00:12:06.330 [ 0]:0x1 00:12:06.330 16:23:15 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:06.330 16:23:15 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:06.330 16:23:15 -- target/ns_masking.sh@40 -- # nguid=fb841e36ac6e49af9e5f73e323328c02 00:12:06.330 16:23:15 -- target/ns_masking.sh@41 -- # [[ fb841e36ac6e49af9e5f73e323328c02 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:06.330 16:23:15 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:12:06.330 16:23:15 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:06.330 16:23:15 -- target/ns_masking.sh@39 -- # grep 0x2 00:12:06.330 [ 1]:0x2 00:12:06.330 16:23:15 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:06.330 16:23:15 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:06.589 16:23:15 -- target/ns_masking.sh@40 -- # nguid=467354fff84840db882b129387886d02 00:12:06.589 16:23:15 -- target/ns_masking.sh@41 -- # [[ 467354fff84840db882b129387886d02 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:06.589 16:23:15 -- target/ns_masking.sh@69 -- # disconnect 00:12:06.589 16:23:15 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:07.526 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.526 16:23:16 -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:07.526 16:23:16 -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:07.785 16:23:16 -- target/ns_masking.sh@77 -- # connect 1 00:12:07.785 16:23:16 -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 37f29096-2cf6-45f7-b42f-d616693d8a7c -a 192.168.100.8 -s 4420 -i 4 00:12:08.353 16:23:17 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:08.353 16:23:17 -- common/autotest_common.sh@1184 -- # local i=0 00:12:08.353 16:23:17 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:08.353 16:23:17 -- common/autotest_common.sh@1186 -- # [[ -n 1 ]] 00:12:08.353 16:23:17 -- common/autotest_common.sh@1187 -- # nvme_device_counter=1 00:12:08.353 16:23:17 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:10.255 16:23:19 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:10.256 16:23:19 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:10.256 16:23:19 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:10.256 16:23:19 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:12:10.256 16:23:19 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:10.256 16:23:19 -- common/autotest_common.sh@1194 -- # return 0 00:12:10.256 16:23:19 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:12:10.256 16:23:19 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:10.256 16:23:19 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:12:10.256 16:23:19 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:12:10.256 16:23:19 -- 
target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:12:10.256 16:23:19 -- common/autotest_common.sh@638 -- # local es=0 00:12:10.256 16:23:19 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:12:10.256 16:23:19 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:12:10.256 16:23:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:10.256 16:23:19 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:12:10.256 16:23:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:10.256 16:23:19 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:12:10.256 16:23:19 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:10.256 16:23:19 -- target/ns_masking.sh@39 -- # grep 0x1 00:12:10.256 16:23:19 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:10.256 16:23:19 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:10.256 16:23:19 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:12:10.256 16:23:19 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:10.256 16:23:19 -- common/autotest_common.sh@641 -- # es=1 00:12:10.256 16:23:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:10.256 16:23:19 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:10.256 16:23:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:10.256 16:23:19 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:12:10.627 16:23:19 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:10.627 16:23:19 -- target/ns_masking.sh@39 -- # grep 0x2 00:12:10.627 [ 0]:0x2 00:12:10.627 16:23:19 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:10.627 16:23:19 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:10.627 16:23:19 -- target/ns_masking.sh@40 -- # nguid=467354fff84840db882b129387886d02 00:12:10.627 16:23:19 -- target/ns_masking.sh@41 -- # [[ 467354fff84840db882b129387886d02 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:10.627 16:23:19 -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:10.627 16:23:19 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:12:10.627 16:23:19 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:10.627 16:23:19 -- target/ns_masking.sh@39 -- # grep 0x1 00:12:10.627 [ 0]:0x1 00:12:10.627 16:23:19 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:10.627 16:23:19 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:10.627 16:23:19 -- target/ns_masking.sh@40 -- # nguid=fb841e36ac6e49af9e5f73e323328c02 00:12:10.627 16:23:19 -- target/ns_masking.sh@41 -- # [[ fb841e36ac6e49af9e5f73e323328c02 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:10.627 16:23:19 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:12:10.627 16:23:19 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:10.627 16:23:19 -- target/ns_masking.sh@39 -- # grep 0x2 00:12:10.627 [ 1]:0x2 00:12:10.627 16:23:19 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:10.627 16:23:19 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:10.627 16:23:19 -- target/ns_masking.sh@40 -- # nguid=467354fff84840db882b129387886d02 00:12:10.627 16:23:19 -- target/ns_masking.sh@41 -- # [[ 467354fff84840db882b129387886d02 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:10.627 16:23:19 -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:10.885 16:23:19 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:12:10.885 16:23:19 -- common/autotest_common.sh@638 -- # local es=0 00:12:10.885 16:23:19 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:12:10.885 16:23:19 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:12:10.885 16:23:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:10.885 16:23:19 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:12:10.885 16:23:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:10.885 16:23:19 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:12:10.885 16:23:19 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:10.885 16:23:19 -- target/ns_masking.sh@39 -- # grep 0x1 00:12:10.885 16:23:19 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:10.885 16:23:19 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:10.885 16:23:19 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:12:10.885 16:23:19 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:10.885 16:23:19 -- common/autotest_common.sh@641 -- # es=1 00:12:10.885 16:23:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:10.885 16:23:19 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:10.885 16:23:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:10.885 16:23:19 -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:12:10.885 16:23:19 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:10.885 16:23:19 -- target/ns_masking.sh@39 -- # grep 0x2 00:12:10.885 [ 0]:0x2 00:12:10.885 16:23:19 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:10.885 16:23:19 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:10.885 16:23:19 -- target/ns_masking.sh@40 -- # nguid=467354fff84840db882b129387886d02 00:12:10.885 16:23:19 -- target/ns_masking.sh@41 -- # [[ 467354fff84840db882b129387886d02 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:10.885 16:23:19 -- target/ns_masking.sh@91 -- # disconnect 00:12:10.885 16:23:19 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:11.818 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.818 16:23:20 -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:12.076 16:23:20 -- target/ns_masking.sh@95 -- # connect 2 00:12:12.076 16:23:20 -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 37f29096-2cf6-45f7-b42f-d616693d8a7c -a 192.168.100.8 -s 4420 -i 4 00:12:12.644 16:23:21 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:12.644 16:23:21 -- common/autotest_common.sh@1184 -- # local i=0 00:12:12.644 16:23:21 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:12.644 16:23:21 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:12:12.644 16:23:21 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:12:12.644 16:23:21 -- 
common/autotest_common.sh@1191 -- # sleep 2 00:12:14.548 16:23:23 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:14.548 16:23:23 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:14.548 16:23:23 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:14.548 16:23:23 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:12:14.548 16:23:23 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:14.548 16:23:23 -- common/autotest_common.sh@1194 -- # return 0 00:12:14.548 16:23:23 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:12:14.548 16:23:23 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:14.548 16:23:23 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:12:14.548 16:23:23 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:12:14.548 16:23:23 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:12:14.548 16:23:23 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:14.548 16:23:23 -- target/ns_masking.sh@39 -- # grep 0x1 00:12:14.548 [ 0]:0x1 00:12:14.548 16:23:23 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:14.548 16:23:23 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:14.548 16:23:23 -- target/ns_masking.sh@40 -- # nguid=fb841e36ac6e49af9e5f73e323328c02 00:12:14.548 16:23:23 -- target/ns_masking.sh@41 -- # [[ fb841e36ac6e49af9e5f73e323328c02 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:14.548 16:23:23 -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:12:14.548 16:23:23 -- target/ns_masking.sh@39 -- # grep 0x2 00:12:14.548 16:23:23 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:14.548 [ 1]:0x2 00:12:14.548 16:23:23 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:14.548 16:23:23 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:14.807 16:23:23 -- target/ns_masking.sh@40 -- # nguid=467354fff84840db882b129387886d02 00:12:14.807 16:23:23 -- target/ns_masking.sh@41 -- # [[ 467354fff84840db882b129387886d02 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:14.807 16:23:23 -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:14.807 16:23:23 -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:12:14.807 16:23:23 -- common/autotest_common.sh@638 -- # local es=0 00:12:14.807 16:23:23 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:12:14.807 16:23:23 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:12:14.807 16:23:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:14.807 16:23:23 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:12:14.807 16:23:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:14.807 16:23:23 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:12:14.807 16:23:23 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:14.807 16:23:23 -- target/ns_masking.sh@39 -- # grep 0x1 00:12:14.807 16:23:23 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:14.807 16:23:23 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:15.067 16:23:23 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:12:15.067 16:23:23 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:15.067 16:23:23 -- common/autotest_common.sh@641 -- # es=1 00:12:15.067 16:23:23 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:15.067 16:23:23 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:15.067 16:23:23 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:15.067 16:23:23 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:12:15.067 16:23:23 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:15.067 16:23:23 -- target/ns_masking.sh@39 -- # grep 0x2 00:12:15.067 [ 0]:0x2 00:12:15.067 16:23:23 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:15.067 16:23:23 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:15.067 16:23:23 -- target/ns_masking.sh@40 -- # nguid=467354fff84840db882b129387886d02 00:12:15.067 16:23:23 -- target/ns_masking.sh@41 -- # [[ 467354fff84840db882b129387886d02 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:15.067 16:23:23 -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:15.067 16:23:23 -- common/autotest_common.sh@638 -- # local es=0 00:12:15.067 16:23:23 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:15.067 16:23:23 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:15.067 16:23:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:15.067 16:23:23 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:15.067 16:23:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:15.067 16:23:23 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:15.067 16:23:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:15.067 16:23:23 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:15.067 16:23:23 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:12:15.067 16:23:23 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:15.067 [2024-04-26 16:23:24.055551] nvmf_rpc.c:1779:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:15.067 request: 00:12:15.067 { 00:12:15.067 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:15.067 "nsid": 2, 00:12:15.067 "host": "nqn.2016-06.io.spdk:host1", 00:12:15.067 "method": "nvmf_ns_remove_host", 00:12:15.067 "req_id": 1 00:12:15.067 } 00:12:15.067 Got JSON-RPC error response 00:12:15.067 response: 00:12:15.067 { 00:12:15.067 "code": -32602, 00:12:15.067 "message": "Invalid parameters" 00:12:15.067 } 00:12:15.067 16:23:24 -- common/autotest_common.sh@641 -- # es=1 00:12:15.067 16:23:24 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:15.067 16:23:24 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:15.067 16:23:24 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:15.067 16:23:24 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:12:15.067 16:23:24 -- 
common/autotest_common.sh@638 -- # local es=0 00:12:15.067 16:23:24 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:12:15.067 16:23:24 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:12:15.067 16:23:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:15.067 16:23:24 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:12:15.326 16:23:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:15.326 16:23:24 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:12:15.326 16:23:24 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:15.326 16:23:24 -- target/ns_masking.sh@39 -- # grep 0x1 00:12:15.326 16:23:24 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:15.326 16:23:24 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:15.326 16:23:24 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:12:15.326 16:23:24 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:15.326 16:23:24 -- common/autotest_common.sh@641 -- # es=1 00:12:15.326 16:23:24 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:15.326 16:23:24 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:15.326 16:23:24 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:15.326 16:23:24 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:12:15.326 16:23:24 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:15.326 16:23:24 -- target/ns_masking.sh@39 -- # grep 0x2 00:12:15.326 [ 0]:0x2 00:12:15.326 16:23:24 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:15.326 16:23:24 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:15.326 16:23:24 -- target/ns_masking.sh@40 -- # nguid=467354fff84840db882b129387886d02 00:12:15.326 16:23:24 -- target/ns_masking.sh@41 -- # [[ 467354fff84840db882b129387886d02 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:15.326 16:23:24 -- target/ns_masking.sh@108 -- # disconnect 00:12:15.326 16:23:24 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:16.264 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.264 16:23:25 -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:16.524 16:23:25 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:12:16.524 16:23:25 -- target/ns_masking.sh@114 -- # nvmftestfini 00:12:16.524 16:23:25 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:16.524 16:23:25 -- nvmf/common.sh@117 -- # sync 00:12:16.524 16:23:25 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:16.524 16:23:25 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:16.524 16:23:25 -- nvmf/common.sh@120 -- # set +e 00:12:16.524 16:23:25 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:16.524 16:23:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:16.524 rmmod nvme_rdma 00:12:16.524 rmmod nvme_fabrics 00:12:16.524 16:23:25 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:16.524 16:23:25 -- nvmf/common.sh@124 -- # set -e 00:12:16.524 16:23:25 -- nvmf/common.sh@125 -- # return 0 00:12:16.524 16:23:25 -- nvmf/common.sh@478 -- # '[' -n 418135 ']' 00:12:16.524 16:23:25 -- nvmf/common.sh@479 -- # killprocess 418135 00:12:16.524 16:23:25 -- common/autotest_common.sh@936 -- # '[' -z 418135 ']' 00:12:16.524 16:23:25 -- 
common/autotest_common.sh@940 -- # kill -0 418135 00:12:16.524 16:23:25 -- common/autotest_common.sh@941 -- # uname 00:12:16.524 16:23:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:16.524 16:23:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 418135 00:12:16.524 16:23:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:16.524 16:23:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:16.524 16:23:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 418135' 00:12:16.524 killing process with pid 418135 00:12:16.524 16:23:25 -- common/autotest_common.sh@955 -- # kill 418135 00:12:16.524 16:23:25 -- common/autotest_common.sh@960 -- # wait 418135 00:12:16.524 [2024-04-26 16:23:25.480747] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:12:16.784 16:23:25 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:16.784 16:23:25 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:12:16.784 00:12:16.784 real 0m21.536s 00:12:16.784 user 1m5.400s 00:12:16.784 sys 0m6.109s 00:12:16.784 16:23:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:16.784 16:23:25 -- common/autotest_common.sh@10 -- # set +x 00:12:16.784 ************************************ 00:12:16.784 END TEST nvmf_ns_masking 00:12:16.784 ************************************ 00:12:16.784 16:23:25 -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:12:16.784 16:23:25 -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:12:16.784 16:23:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:16.784 16:23:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:16.784 16:23:25 -- common/autotest_common.sh@10 -- # set +x 00:12:17.043 ************************************ 00:12:17.043 START TEST nvmf_nvme_cli 00:12:17.043 ************************************ 00:12:17.043 16:23:25 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:12:17.043 * Looking for test storage... 
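The namespace-masking behaviour exercised by the nvmf_ns_masking run above condenses to the sketch below. The RPC names, NQNs and target address are taken from the log; /dev/nvme0 is the controller the connect step reported, and the $rpc shorthand is only a convenience of this sketch:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

  # export a namespace that no host can see until it is explicitly allowed
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible

  # grant, then revoke, visibility of namespace 1 for one host NQN
  $rpc nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

  # initiator-side checks, the same ones the ns_is_visible helper traces above
  nvme list-ns /dev/nvme0 | grep 0x1                    # listed only while the host is allowed
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid   # all-zero NGUID once the namespace is masked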
00:12:17.043 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:17.043 16:23:26 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:17.043 16:23:26 -- nvmf/common.sh@7 -- # uname -s 00:12:17.043 16:23:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:17.043 16:23:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:17.043 16:23:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:17.043 16:23:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:17.043 16:23:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:17.043 16:23:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:17.043 16:23:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:17.043 16:23:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:17.043 16:23:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:17.043 16:23:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:17.043 16:23:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:12:17.044 16:23:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:12:17.044 16:23:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:17.044 16:23:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:17.044 16:23:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:17.044 16:23:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:17.044 16:23:26 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:17.044 16:23:26 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:17.044 16:23:26 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:17.044 16:23:26 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:17.044 16:23:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.044 16:23:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.044 16:23:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.044 16:23:26 -- paths/export.sh@5 -- # export PATH 00:12:17.044 16:23:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.044 16:23:26 -- nvmf/common.sh@47 -- # : 0 00:12:17.044 16:23:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:17.044 16:23:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:17.044 16:23:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:17.044 16:23:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:17.044 16:23:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:17.044 16:23:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:17.044 16:23:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:17.044 16:23:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:17.044 16:23:26 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:17.044 16:23:26 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:17.044 16:23:26 -- target/nvme_cli.sh@14 -- # devs=() 00:12:17.044 16:23:26 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:12:17.044 16:23:26 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:12:17.044 16:23:26 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:17.044 16:23:26 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:17.044 16:23:26 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:17.044 16:23:26 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:17.044 16:23:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.044 16:23:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:17.044 16:23:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:17.304 16:23:26 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:17.304 16:23:26 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:17.304 16:23:26 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:17.304 16:23:26 -- common/autotest_common.sh@10 -- # set +x 00:12:23.872 16:23:31 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:23.872 16:23:31 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:23.872 16:23:31 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:23.872 16:23:31 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:23.872 16:23:31 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:23.872 16:23:31 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:23.872 16:23:31 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:23.872 16:23:31 -- nvmf/common.sh@295 -- # net_devs=() 00:12:23.872 16:23:31 -- nvmf/common.sh@295 -- 
# local -ga net_devs 00:12:23.872 16:23:31 -- nvmf/common.sh@296 -- # e810=() 00:12:23.872 16:23:31 -- nvmf/common.sh@296 -- # local -ga e810 00:12:23.873 16:23:31 -- nvmf/common.sh@297 -- # x722=() 00:12:23.873 16:23:31 -- nvmf/common.sh@297 -- # local -ga x722 00:12:23.873 16:23:31 -- nvmf/common.sh@298 -- # mlx=() 00:12:23.873 16:23:31 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:23.873 16:23:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:23.873 16:23:31 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:23.873 16:23:31 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:23.873 16:23:31 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:23.873 16:23:31 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:23.873 16:23:31 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:23.873 16:23:31 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:23.873 16:23:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:23.873 16:23:31 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:23.873 16:23:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:23.873 16:23:31 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:23.873 16:23:31 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:23.873 16:23:31 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:23.873 16:23:31 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:23.873 16:23:31 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:23.873 16:23:31 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:23.873 16:23:31 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:23.873 16:23:31 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:23.873 16:23:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:23.873 16:23:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:12:23.873 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:12:23.873 16:23:31 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:23.873 16:23:31 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:23.873 16:23:31 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:12:23.873 16:23:31 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:12:23.873 16:23:31 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:23.873 16:23:31 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:23.873 16:23:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:23.873 16:23:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:12:23.873 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:12:23.873 16:23:31 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:23.873 16:23:31 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:23.873 16:23:31 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:12:23.873 16:23:31 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:12:23.873 16:23:31 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:23.873 16:23:31 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:23.873 16:23:31 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:23.873 16:23:31 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:23.873 16:23:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:23.873 16:23:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:12:23.873 16:23:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:23.873 16:23:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:23.873 16:23:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:12:23.873 Found net devices under 0000:18:00.0: mlx_0_0 00:12:23.873 16:23:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:23.873 16:23:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:23.873 16:23:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:23.873 16:23:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:23.873 16:23:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:23.873 16:23:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:12:23.873 Found net devices under 0000:18:00.1: mlx_0_1 00:12:23.873 16:23:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:23.873 16:23:31 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:23.873 16:23:31 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:23.873 16:23:31 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:23.873 16:23:31 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:12:23.873 16:23:31 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:12:23.873 16:23:31 -- nvmf/common.sh@409 -- # rdma_device_init 00:12:23.873 16:23:31 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:12:23.873 16:23:31 -- nvmf/common.sh@58 -- # uname 00:12:23.873 16:23:31 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:23.873 16:23:31 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:23.873 16:23:31 -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:23.873 16:23:31 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:23.873 16:23:31 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:23.873 16:23:31 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:23.873 16:23:31 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:23.873 16:23:31 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:23.873 16:23:31 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:12:23.873 16:23:31 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:23.873 16:23:31 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:23.873 16:23:31 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:23.873 16:23:31 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:23.873 16:23:31 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:23.873 16:23:31 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:23.873 16:23:32 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:23.873 16:23:32 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:23.873 16:23:32 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:23.873 16:23:32 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:23.873 16:23:32 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:23.873 16:23:32 -- nvmf/common.sh@105 -- # continue 2 00:12:23.873 16:23:32 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:23.873 16:23:32 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:23.873 16:23:32 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:23.873 16:23:32 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:23.873 16:23:32 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:23.873 16:23:32 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:23.873 16:23:32 -- nvmf/common.sh@105 -- # continue 2 00:12:23.873 
16:23:32 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:23.873 16:23:32 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:23.873 16:23:32 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:23.873 16:23:32 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:23.873 16:23:32 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:23.873 16:23:32 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:23.873 16:23:32 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:23.873 16:23:32 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:23.873 16:23:32 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:23.873 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:23.873 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:12:23.873 altname enp24s0f0np0 00:12:23.873 altname ens785f0np0 00:12:23.873 inet 192.168.100.8/24 scope global mlx_0_0 00:12:23.873 valid_lft forever preferred_lft forever 00:12:23.873 16:23:32 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:23.873 16:23:32 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:23.873 16:23:32 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:23.873 16:23:32 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:23.873 16:23:32 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:23.873 16:23:32 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:23.873 16:23:32 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:23.873 16:23:32 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:23.873 16:23:32 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:23.873 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:23.873 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:12:23.873 altname enp24s0f1np1 00:12:23.873 altname ens785f1np1 00:12:23.873 inet 192.168.100.9/24 scope global mlx_0_1 00:12:23.873 valid_lft forever preferred_lft forever 00:12:23.873 16:23:32 -- nvmf/common.sh@411 -- # return 0 00:12:23.873 16:23:32 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:23.873 16:23:32 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:23.873 16:23:32 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:12:23.873 16:23:32 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:12:23.873 16:23:32 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:23.873 16:23:32 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:23.873 16:23:32 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:23.873 16:23:32 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:23.873 16:23:32 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:23.873 16:23:32 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:23.873 16:23:32 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:23.873 16:23:32 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:23.873 16:23:32 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:23.873 16:23:32 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:23.873 16:23:32 -- nvmf/common.sh@105 -- # continue 2 00:12:23.873 16:23:32 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:23.873 16:23:32 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:23.873 16:23:32 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:23.873 16:23:32 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:23.873 16:23:32 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:23.873 16:23:32 -- nvmf/common.sh@104 -- # echo 
mlx_0_1 00:12:23.873 16:23:32 -- nvmf/common.sh@105 -- # continue 2 00:12:23.873 16:23:32 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:23.873 16:23:32 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:23.873 16:23:32 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:23.873 16:23:32 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:23.873 16:23:32 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:23.873 16:23:32 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:23.873 16:23:32 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:23.873 16:23:32 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:23.873 16:23:32 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:23.873 16:23:32 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:23.873 16:23:32 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:23.873 16:23:32 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:23.873 16:23:32 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:12:23.873 192.168.100.9' 00:12:23.873 16:23:32 -- nvmf/common.sh@446 -- # head -n 1 00:12:23.873 16:23:32 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:12:23.874 192.168.100.9' 00:12:23.874 16:23:32 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:23.874 16:23:32 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:12:23.874 192.168.100.9' 00:12:23.874 16:23:32 -- nvmf/common.sh@447 -- # tail -n +2 00:12:23.874 16:23:32 -- nvmf/common.sh@447 -- # head -n 1 00:12:23.874 16:23:32 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:23.874 16:23:32 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:12:23.874 16:23:32 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:23.874 16:23:32 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:12:23.874 16:23:32 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:12:23.874 16:23:32 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:12:23.874 16:23:32 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:23.874 16:23:32 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:23.874 16:23:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:23.874 16:23:32 -- common/autotest_common.sh@10 -- # set +x 00:12:23.874 16:23:32 -- nvmf/common.sh@470 -- # nvmfpid=423231 00:12:23.874 16:23:32 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:23.874 16:23:32 -- nvmf/common.sh@471 -- # waitforlisten 423231 00:12:23.874 16:23:32 -- common/autotest_common.sh@817 -- # '[' -z 423231 ']' 00:12:23.874 16:23:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.874 16:23:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:23.874 16:23:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.874 16:23:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:23.874 16:23:32 -- common/autotest_common.sh@10 -- # set +x 00:12:23.874 [2024-04-26 16:23:32.217062] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
00:12:23.874 [2024-04-26 16:23:32.217117] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:23.874 EAL: No free 2048 kB hugepages reported on node 1 00:12:23.874 [2024-04-26 16:23:32.290126] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:23.874 [2024-04-26 16:23:32.372179] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:23.874 [2024-04-26 16:23:32.372223] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:23.874 [2024-04-26 16:23:32.372233] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:23.874 [2024-04-26 16:23:32.372241] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:23.874 [2024-04-26 16:23:32.372264] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:23.874 [2024-04-26 16:23:32.372318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:23.874 [2024-04-26 16:23:32.372407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:23.874 [2024-04-26 16:23:32.372430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:23.874 [2024-04-26 16:23:32.372432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.133 16:23:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:24.133 16:23:33 -- common/autotest_common.sh@850 -- # return 0 00:12:24.133 16:23:33 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:24.133 16:23:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:24.133 16:23:33 -- common/autotest_common.sh@10 -- # set +x 00:12:24.133 16:23:33 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:24.133 16:23:33 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:24.133 16:23:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:24.133 16:23:33 -- common/autotest_common.sh@10 -- # set +x 00:12:24.133 [2024-04-26 16:23:33.108069] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1af6310/0x1afa800) succeed. 00:12:24.133 [2024-04-26 16:23:33.118350] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1af7950/0x1b3be90) succeed. 
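The NVMF_FIRST_TARGET_IP/NVMF_SECOND_TARGET_IP values used throughout this run come from a small amount of interface probing; a sketch of the same steps, with the module list and interface names copied from this log (the for-loops are only a compact rendering of the per-interface calls traced above):

  # RDMA core modules plus the NVMe/RDMA initiator, as loaded in this run
  for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm nvme-rdma; do
      modprobe "$m"
  done

  # first IPv4 address of each Mellanox netdev, mirroring the get_ip_address helper
  for ifc in mlx_0_0 mlx_0_1; do
      ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
  done
  # -> 192.168.100.8 and 192.168.100.9 on this runner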
00:12:24.393 16:23:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:24.393 16:23:33 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:24.393 16:23:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:24.393 16:23:33 -- common/autotest_common.sh@10 -- # set +x 00:12:24.393 Malloc0 00:12:24.393 16:23:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:24.393 16:23:33 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:24.393 16:23:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:24.393 16:23:33 -- common/autotest_common.sh@10 -- # set +x 00:12:24.393 Malloc1 00:12:24.393 16:23:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:24.393 16:23:33 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:24.393 16:23:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:24.393 16:23:33 -- common/autotest_common.sh@10 -- # set +x 00:12:24.393 16:23:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:24.393 16:23:33 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:24.393 16:23:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:24.393 16:23:33 -- common/autotest_common.sh@10 -- # set +x 00:12:24.393 16:23:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:24.393 16:23:33 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:24.393 16:23:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:24.393 16:23:33 -- common/autotest_common.sh@10 -- # set +x 00:12:24.393 16:23:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:24.393 16:23:33 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:24.393 16:23:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:24.393 16:23:33 -- common/autotest_common.sh@10 -- # set +x 00:12:24.393 [2024-04-26 16:23:33.320396] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:24.394 16:23:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:24.394 16:23:33 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:12:24.394 16:23:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:24.394 16:23:33 -- common/autotest_common.sh@10 -- # set +x 00:12:24.394 16:23:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:24.394 16:23:33 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -a 192.168.100.8 -s 4420 00:12:24.652 00:12:24.652 Discovery Log Number of Records 2, Generation counter 2 00:12:24.652 =====Discovery Log Entry 0====== 00:12:24.652 trtype: rdma 00:12:24.652 adrfam: ipv4 00:12:24.652 subtype: current discovery subsystem 00:12:24.652 treq: not required 00:12:24.652 portid: 0 00:12:24.652 trsvcid: 4420 00:12:24.652 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:24.652 traddr: 192.168.100.8 00:12:24.652 eflags: explicit discovery connections, duplicate discovery information 00:12:24.652 rdma_prtype: not specified 00:12:24.652 rdma_qptype: connected 00:12:24.652 rdma_cms: rdma-cm 00:12:24.652 rdma_pkey: 0x0000 00:12:24.652 =====Discovery Log Entry 1====== 00:12:24.652 trtype: rdma 
00:12:24.652 adrfam: ipv4 00:12:24.652 subtype: nvme subsystem 00:12:24.652 treq: not required 00:12:24.652 portid: 0 00:12:24.652 trsvcid: 4420 00:12:24.652 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:24.652 traddr: 192.168.100.8 00:12:24.652 eflags: none 00:12:24.652 rdma_prtype: not specified 00:12:24.652 rdma_qptype: connected 00:12:24.652 rdma_cms: rdma-cm 00:12:24.652 rdma_pkey: 0x0000 00:12:24.652 16:23:33 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:24.652 16:23:33 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:24.652 16:23:33 -- nvmf/common.sh@511 -- # local dev _ 00:12:24.652 16:23:33 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:24.652 16:23:33 -- nvmf/common.sh@510 -- # nvme list 00:12:24.652 16:23:33 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:12:24.652 16:23:33 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:24.652 16:23:33 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:12:24.652 16:23:33 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:24.652 16:23:33 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:24.652 16:23:33 -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:26.031 16:23:35 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:26.031 16:23:35 -- common/autotest_common.sh@1184 -- # local i=0 00:12:26.031 16:23:35 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:26.031 16:23:35 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:12:26.031 16:23:35 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:12:26.031 16:23:35 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:28.567 16:23:37 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:28.567 16:23:37 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:28.567 16:23:37 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:28.567 16:23:37 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:12:28.567 16:23:37 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:28.567 16:23:37 -- common/autotest_common.sh@1194 -- # return 0 00:12:28.567 16:23:37 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:28.567 16:23:37 -- nvmf/common.sh@511 -- # local dev _ 00:12:28.567 16:23:37 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:28.567 16:23:37 -- nvmf/common.sh@510 -- # nvme list 00:12:28.567 16:23:37 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:12:28.567 16:23:37 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:28.567 16:23:37 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:12:28.567 16:23:37 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:28.567 16:23:37 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:28.567 16:23:37 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:12:28.567 16:23:37 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:28.567 16:23:37 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:28.567 16:23:37 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:12:28.567 16:23:37 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:28.567 16:23:37 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:12:28.567 /dev/nvme0n1 ]] 00:12:28.567 16:23:37 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:28.567 16:23:37 -- target/nvme_cli.sh@59 -- # get_nvme_devs 
00:12:28.567 16:23:37 -- nvmf/common.sh@511 -- # local dev _ 00:12:28.567 16:23:37 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:28.567 16:23:37 -- nvmf/common.sh@510 -- # nvme list 00:12:28.567 16:23:37 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:12:28.567 16:23:37 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:28.567 16:23:37 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:12:28.567 16:23:37 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:28.567 16:23:37 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:28.567 16:23:37 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:12:28.567 16:23:37 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:28.567 16:23:37 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:28.567 16:23:37 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:12:28.567 16:23:37 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:28.567 16:23:37 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:28.567 16:23:37 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:31.857 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.857 16:23:40 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:31.857 16:23:40 -- common/autotest_common.sh@1205 -- # local i=0 00:12:31.857 16:23:40 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:12:31.857 16:23:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:31.857 16:23:40 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:12:31.857 16:23:40 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:31.857 16:23:40 -- common/autotest_common.sh@1217 -- # return 0 00:12:31.857 16:23:40 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:31.857 16:23:40 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:31.857 16:23:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:31.857 16:23:40 -- common/autotest_common.sh@10 -- # set +x 00:12:31.857 16:23:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:31.857 16:23:40 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:31.857 16:23:40 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:31.857 16:23:40 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:31.857 16:23:40 -- nvmf/common.sh@117 -- # sync 00:12:31.857 16:23:40 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:31.857 16:23:40 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:31.857 16:23:40 -- nvmf/common.sh@120 -- # set +e 00:12:31.857 16:23:40 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:31.857 16:23:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:31.857 rmmod nvme_rdma 00:12:31.857 rmmod nvme_fabrics 00:12:31.857 16:23:40 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:31.857 16:23:40 -- nvmf/common.sh@124 -- # set -e 00:12:31.857 16:23:40 -- nvmf/common.sh@125 -- # return 0 00:12:31.857 16:23:40 -- nvmf/common.sh@478 -- # '[' -n 423231 ']' 00:12:31.857 16:23:40 -- nvmf/common.sh@479 -- # killprocess 423231 00:12:31.857 16:23:40 -- common/autotest_common.sh@936 -- # '[' -z 423231 ']' 00:12:31.857 16:23:40 -- common/autotest_common.sh@940 -- # kill -0 423231 00:12:31.857 16:23:40 -- common/autotest_common.sh@941 -- # uname 00:12:31.857 16:23:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:31.857 16:23:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 423231 00:12:31.857 16:23:40 -- 
common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:31.857 16:23:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:31.857 16:23:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 423231' 00:12:31.857 killing process with pid 423231 00:12:31.857 16:23:40 -- common/autotest_common.sh@955 -- # kill 423231 00:12:31.857 16:23:40 -- common/autotest_common.sh@960 -- # wait 423231 00:12:31.858 [2024-04-26 16:23:40.583181] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:12:31.858 16:23:40 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:31.858 16:23:40 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:12:31.858 00:12:31.858 real 0m14.897s 00:12:31.858 user 0m35.300s 00:12:31.858 sys 0m5.417s 00:12:31.858 16:23:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:31.858 16:23:40 -- common/autotest_common.sh@10 -- # set +x 00:12:31.858 ************************************ 00:12:31.858 END TEST nvmf_nvme_cli 00:12:31.858 ************************************ 00:12:31.858 16:23:40 -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:12:31.858 16:23:40 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:12:31.858 16:23:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:31.858 16:23:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:31.858 16:23:40 -- common/autotest_common.sh@10 -- # set +x 00:12:32.117 ************************************ 00:12:32.117 START TEST nvmf_host_management 00:12:32.117 ************************************ 00:12:32.117 16:23:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:12:32.376 * Looking for test storage... 
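The teardown traced at the end of the nvme_cli test above follows the usual pattern: disconnect the host, unload the fabrics modules, stop the target. A minimal sketch of that sequence, assuming it runs as root and that nvmfpid is a placeholder for the target PID recorded earlier (423231 in this run):

nvmfpid=423231   # placeholder: PID of the nvmf_tgt started earlier in this run

nvme disconnect -n nqn.2016-06.io.spdk:cnode1      # drop the host-side controller
modprobe -v -r nvme-rdma                           # unloads nvme_rdma (and its nvme_fabrics user)
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                                    # stop the target application
while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.5; done   # wait for it to exit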
00:12:32.376 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:32.376 16:23:41 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:32.376 16:23:41 -- nvmf/common.sh@7 -- # uname -s 00:12:32.376 16:23:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:32.376 16:23:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:32.376 16:23:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:32.376 16:23:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:32.376 16:23:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:32.376 16:23:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:32.377 16:23:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:32.377 16:23:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:32.377 16:23:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:32.377 16:23:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:32.377 16:23:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:12:32.377 16:23:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:12:32.377 16:23:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:32.377 16:23:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:32.377 16:23:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:32.377 16:23:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:32.377 16:23:41 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:32.377 16:23:41 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:32.377 16:23:41 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:32.377 16:23:41 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:32.377 16:23:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.377 16:23:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.377 16:23:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.377 16:23:41 -- paths/export.sh@5 -- # export PATH 00:12:32.377 16:23:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.377 16:23:41 -- nvmf/common.sh@47 -- # : 0 00:12:32.377 16:23:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:32.377 16:23:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:32.377 16:23:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:32.377 16:23:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:32.377 16:23:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:32.377 16:23:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:32.377 16:23:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:32.377 16:23:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:32.377 16:23:41 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:32.377 16:23:41 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:32.377 16:23:41 -- target/host_management.sh@105 -- # nvmftestinit 00:12:32.377 16:23:41 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:12:32.377 16:23:41 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:32.377 16:23:41 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:32.377 16:23:41 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:32.377 16:23:41 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:32.377 16:23:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.377 16:23:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:32.377 16:23:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.377 16:23:41 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:32.377 16:23:41 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:32.377 16:23:41 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:32.377 16:23:41 -- common/autotest_common.sh@10 -- # set +x 00:12:38.947 16:23:47 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:38.947 16:23:47 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:38.947 16:23:47 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:38.947 16:23:47 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:38.947 16:23:47 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:38.947 16:23:47 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:38.947 16:23:47 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:38.947 16:23:47 -- nvmf/common.sh@295 -- # net_devs=() 00:12:38.947 16:23:47 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:38.947 
16:23:47 -- nvmf/common.sh@296 -- # e810=() 00:12:38.947 16:23:47 -- nvmf/common.sh@296 -- # local -ga e810 00:12:38.947 16:23:47 -- nvmf/common.sh@297 -- # x722=() 00:12:38.947 16:23:47 -- nvmf/common.sh@297 -- # local -ga x722 00:12:38.947 16:23:47 -- nvmf/common.sh@298 -- # mlx=() 00:12:38.947 16:23:47 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:38.947 16:23:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:38.947 16:23:47 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:38.947 16:23:47 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:38.947 16:23:47 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:38.947 16:23:47 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:38.947 16:23:47 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:38.947 16:23:47 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:38.947 16:23:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:38.947 16:23:47 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:38.947 16:23:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:38.947 16:23:47 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:38.947 16:23:47 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:38.947 16:23:47 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:38.947 16:23:47 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:38.947 16:23:47 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:38.947 16:23:47 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:38.947 16:23:47 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:38.947 16:23:47 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:38.947 16:23:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:38.947 16:23:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:12:38.947 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:12:38.947 16:23:47 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:38.947 16:23:47 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:38.947 16:23:47 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:12:38.947 16:23:47 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:12:38.947 16:23:47 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:38.947 16:23:47 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:38.947 16:23:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:38.947 16:23:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:12:38.947 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:12:38.947 16:23:47 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:38.947 16:23:47 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:38.947 16:23:47 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:12:38.947 16:23:47 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:12:38.947 16:23:47 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:38.947 16:23:47 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:38.947 16:23:47 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:38.947 16:23:47 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:38.947 16:23:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:38.947 16:23:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.947 16:23:47 -- 
nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:38.948 16:23:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.948 16:23:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:12:38.948 Found net devices under 0000:18:00.0: mlx_0_0 00:12:38.948 16:23:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.948 16:23:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:38.948 16:23:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.948 16:23:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:38.948 16:23:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.948 16:23:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:12:38.948 Found net devices under 0000:18:00.1: mlx_0_1 00:12:38.948 16:23:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.948 16:23:47 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:38.948 16:23:47 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:38.948 16:23:47 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:38.948 16:23:47 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:12:38.948 16:23:47 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:12:38.948 16:23:47 -- nvmf/common.sh@409 -- # rdma_device_init 00:12:38.948 16:23:47 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:12:38.948 16:23:47 -- nvmf/common.sh@58 -- # uname 00:12:38.948 16:23:47 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:38.948 16:23:47 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:38.948 16:23:47 -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:38.948 16:23:47 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:38.948 16:23:47 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:38.948 16:23:47 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:38.948 16:23:47 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:38.948 16:23:47 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:38.948 16:23:47 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:12:38.948 16:23:47 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:38.948 16:23:47 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:38.948 16:23:47 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:38.948 16:23:47 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:38.948 16:23:47 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:38.948 16:23:47 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:38.948 16:23:47 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:38.948 16:23:47 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:38.948 16:23:47 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:38.948 16:23:47 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:38.948 16:23:47 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:38.948 16:23:47 -- nvmf/common.sh@105 -- # continue 2 00:12:38.948 16:23:47 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:38.948 16:23:47 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:38.948 16:23:47 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:38.948 16:23:47 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:38.948 16:23:47 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:38.948 16:23:47 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:38.948 16:23:47 -- nvmf/common.sh@105 -- # continue 2 00:12:38.948 16:23:47 -- 
nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:38.948 16:23:47 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:38.948 16:23:47 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:38.948 16:23:47 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:38.948 16:23:47 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:38.948 16:23:47 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:38.948 16:23:47 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:38.948 16:23:47 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:38.948 16:23:47 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:38.948 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:38.948 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:12:38.948 altname enp24s0f0np0 00:12:38.948 altname ens785f0np0 00:12:38.948 inet 192.168.100.8/24 scope global mlx_0_0 00:12:38.948 valid_lft forever preferred_lft forever 00:12:38.948 16:23:47 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:38.948 16:23:47 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:38.948 16:23:47 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:38.948 16:23:47 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:38.948 16:23:47 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:38.948 16:23:47 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:38.948 16:23:47 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:38.948 16:23:47 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:38.948 16:23:47 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:38.948 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:38.948 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:12:38.948 altname enp24s0f1np1 00:12:38.948 altname ens785f1np1 00:12:38.948 inet 192.168.100.9/24 scope global mlx_0_1 00:12:38.948 valid_lft forever preferred_lft forever 00:12:38.948 16:23:47 -- nvmf/common.sh@411 -- # return 0 00:12:38.948 16:23:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:38.948 16:23:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:38.948 16:23:47 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:12:38.948 16:23:47 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:12:38.948 16:23:47 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:38.948 16:23:47 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:38.948 16:23:47 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:38.948 16:23:47 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:38.948 16:23:47 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:38.948 16:23:47 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:38.948 16:23:47 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:38.948 16:23:47 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:38.948 16:23:47 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:38.948 16:23:47 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:38.948 16:23:47 -- nvmf/common.sh@105 -- # continue 2 00:12:38.948 16:23:47 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:38.948 16:23:47 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:38.948 16:23:47 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:38.948 16:23:47 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:38.948 16:23:47 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:38.948 16:23:47 -- nvmf/common.sh@104 -- # echo mlx_0_1 
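The allocate_nic_ips trace above derives each interface's IPv4 address with a small ip/awk/cut pipeline; pulled out as a standalone helper it looks roughly like this (same commands as traced, interface names from this run):

# Print the first IPv4 address configured on an interface (e.g. mlx_0_0 -> 192.168.100.8).
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

get_ip_address mlx_0_0   # 192.168.100.8 in this run
get_ip_address mlx_0_1   # 192.168.100.9 in this run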
00:12:38.948 16:23:47 -- nvmf/common.sh@105 -- # continue 2 00:12:38.948 16:23:47 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:38.948 16:23:47 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:38.948 16:23:47 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:38.948 16:23:47 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:38.948 16:23:47 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:38.948 16:23:47 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:38.948 16:23:47 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:38.948 16:23:47 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:38.948 16:23:47 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:38.948 16:23:47 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:38.948 16:23:47 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:38.948 16:23:47 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:38.948 16:23:47 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:12:38.948 192.168.100.9' 00:12:38.948 16:23:47 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:12:38.948 192.168.100.9' 00:12:38.948 16:23:47 -- nvmf/common.sh@446 -- # head -n 1 00:12:38.948 16:23:47 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:39.208 16:23:47 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:12:39.208 192.168.100.9' 00:12:39.208 16:23:47 -- nvmf/common.sh@447 -- # tail -n +2 00:12:39.208 16:23:47 -- nvmf/common.sh@447 -- # head -n 1 00:12:39.208 16:23:47 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:39.208 16:23:47 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:12:39.208 16:23:47 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:39.208 16:23:47 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:12:39.208 16:23:47 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:12:39.208 16:23:47 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:12:39.208 16:23:48 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:12:39.208 16:23:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:39.208 16:23:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:39.208 16:23:48 -- common/autotest_common.sh@10 -- # set +x 00:12:39.208 ************************************ 00:12:39.208 START TEST nvmf_host_management 00:12:39.208 ************************************ 00:12:39.208 16:23:48 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:12:39.208 16:23:48 -- target/host_management.sh@69 -- # starttarget 00:12:39.208 16:23:48 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:12:39.208 16:23:48 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:39.208 16:23:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:39.208 16:23:48 -- common/autotest_common.sh@10 -- # set +x 00:12:39.208 16:23:48 -- nvmf/common.sh@470 -- # nvmfpid=427352 00:12:39.208 16:23:48 -- nvmf/common.sh@471 -- # waitforlisten 427352 00:12:39.208 16:23:48 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:12:39.208 16:23:48 -- common/autotest_common.sh@817 -- # '[' -z 427352 ']' 00:12:39.208 16:23:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.208 16:23:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:39.208 16:23:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
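nvmftestinit then picks the first and second RDMA-capable IPs out of the newline-separated list with head/tail, exactly as traced above; a condensed sketch of that step (addresses are the ones found in this run):

# RDMA_IP_LIST is the newline-separated list of RDMA-capable IPs gathered above.
RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'

NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9
NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'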
00:12:39.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:39.208 16:23:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:39.208 16:23:48 -- common/autotest_common.sh@10 -- # set +x 00:12:39.467 [2024-04-26 16:23:48.240908] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:12:39.467 [2024-04-26 16:23:48.240962] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:39.467 EAL: No free 2048 kB hugepages reported on node 1 00:12:39.467 [2024-04-26 16:23:48.313547] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:39.467 [2024-04-26 16:23:48.389272] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:39.467 [2024-04-26 16:23:48.389318] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:39.467 [2024-04-26 16:23:48.389328] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:39.467 [2024-04-26 16:23:48.389336] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:39.467 [2024-04-26 16:23:48.389344] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:39.467 [2024-04-26 16:23:48.389405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:39.467 [2024-04-26 16:23:48.389483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:39.467 [2024-04-26 16:23:48.389582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:39.467 [2024-04-26 16:23:48.389583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:12:40.034 16:23:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:40.034 16:23:49 -- common/autotest_common.sh@850 -- # return 0 00:12:40.034 16:23:49 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:40.034 16:23:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:40.034 16:23:49 -- common/autotest_common.sh@10 -- # set +x 00:12:40.293 16:23:49 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:40.293 16:23:49 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:40.293 16:23:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:40.293 16:23:49 -- common/autotest_common.sh@10 -- # set +x 00:12:40.293 [2024-04-26 16:23:49.129686] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1a3f600/0x1a43af0) succeed. 00:12:40.293 [2024-04-26 16:23:49.139966] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1a40c40/0x1a85180) succeed. 
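rpc_cmd in these traces is the test harness's thin wrapper around SPDK's scripts/rpc.py, pointed at the target's RPC socket. Assuming the default socket path and a checkout of the spdk repo as the working directory, the transport creation traced above is roughly equivalent to:

# Create the RDMA transport on the running nvmf_tgt (default RPC socket assumed).
./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport \
    -t rdma --num-shared-buffers 1024 -u 8192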
00:12:40.293 16:23:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:40.293 16:23:49 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:12:40.293 16:23:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:40.293 16:23:49 -- common/autotest_common.sh@10 -- # set +x 00:12:40.293 16:23:49 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:12:40.293 16:23:49 -- target/host_management.sh@23 -- # cat 00:12:40.293 16:23:49 -- target/host_management.sh@30 -- # rpc_cmd 00:12:40.293 16:23:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:40.293 16:23:49 -- common/autotest_common.sh@10 -- # set +x 00:12:40.293 Malloc0 00:12:40.553 [2024-04-26 16:23:49.326805] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:40.553 16:23:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:40.553 16:23:49 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:12:40.553 16:23:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:40.553 16:23:49 -- common/autotest_common.sh@10 -- # set +x 00:12:40.553 16:23:49 -- target/host_management.sh@73 -- # perfpid=427559 00:12:40.553 16:23:49 -- target/host_management.sh@74 -- # waitforlisten 427559 /var/tmp/bdevperf.sock 00:12:40.553 16:23:49 -- common/autotest_common.sh@817 -- # '[' -z 427559 ']' 00:12:40.553 16:23:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:40.553 16:23:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:40.553 16:23:49 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:12:40.553 16:23:49 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:12:40.553 16:23:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:40.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:40.553 16:23:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:40.553 16:23:49 -- nvmf/common.sh@521 -- # config=() 00:12:40.553 16:23:49 -- common/autotest_common.sh@10 -- # set +x 00:12:40.553 16:23:49 -- nvmf/common.sh@521 -- # local subsystem config 00:12:40.553 16:23:49 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:12:40.553 16:23:49 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:12:40.553 { 00:12:40.553 "params": { 00:12:40.553 "name": "Nvme$subsystem", 00:12:40.553 "trtype": "$TEST_TRANSPORT", 00:12:40.553 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:40.553 "adrfam": "ipv4", 00:12:40.553 "trsvcid": "$NVMF_PORT", 00:12:40.553 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:40.553 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:40.553 "hdgst": ${hdgst:-false}, 00:12:40.553 "ddgst": ${ddgst:-false} 00:12:40.553 }, 00:12:40.553 "method": "bdev_nvme_attach_controller" 00:12:40.553 } 00:12:40.553 EOF 00:12:40.553 )") 00:12:40.553 16:23:49 -- nvmf/common.sh@543 -- # cat 00:12:40.553 16:23:49 -- nvmf/common.sh@545 -- # jq . 
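The /dev/fd/63 argument is bash process substitution: gen_nvmf_target_json renders the per-subsystem template traced above into the JSON shown immediately below, and bdevperf reads it as its bdev configuration. The harness invocation is effectively the following (path relative to the spdk repo, options as traced):

# Run bdevperf against a config generated on the fly.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 10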
00:12:40.553 16:23:49 -- nvmf/common.sh@546 -- # IFS=, 00:12:40.553 16:23:49 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:40.553 "params": { 00:12:40.553 "name": "Nvme0", 00:12:40.553 "trtype": "rdma", 00:12:40.553 "traddr": "192.168.100.8", 00:12:40.553 "adrfam": "ipv4", 00:12:40.553 "trsvcid": "4420", 00:12:40.553 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:40.553 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:40.553 "hdgst": false, 00:12:40.553 "ddgst": false 00:12:40.553 }, 00:12:40.553 "method": "bdev_nvme_attach_controller" 00:12:40.553 }' 00:12:40.553 [2024-04-26 16:23:49.429680] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:12:40.553 [2024-04-26 16:23:49.429734] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid427559 ] 00:12:40.553 EAL: No free 2048 kB hugepages reported on node 1 00:12:40.553 [2024-04-26 16:23:49.502128] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.812 [2024-04-26 16:23:49.579956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.812 Running I/O for 10 seconds... 00:12:41.382 16:23:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:41.383 16:23:50 -- common/autotest_common.sh@850 -- # return 0 00:12:41.383 16:23:50 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:12:41.383 16:23:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:41.383 16:23:50 -- common/autotest_common.sh@10 -- # set +x 00:12:41.383 16:23:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:41.383 16:23:50 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:41.383 16:23:50 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:12:41.383 16:23:50 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:12:41.383 16:23:50 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:12:41.383 16:23:50 -- target/host_management.sh@52 -- # local ret=1 00:12:41.383 16:23:50 -- target/host_management.sh@53 -- # local i 00:12:41.383 16:23:50 -- target/host_management.sh@54 -- # (( i = 10 )) 00:12:41.383 16:23:50 -- target/host_management.sh@54 -- # (( i != 0 )) 00:12:41.383 16:23:50 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:12:41.383 16:23:50 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:12:41.383 16:23:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:41.383 16:23:50 -- common/autotest_common.sh@10 -- # set +x 00:12:41.383 16:23:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:41.383 16:23:50 -- target/host_management.sh@55 -- # read_io_count=1519 00:12:41.383 16:23:50 -- target/host_management.sh@58 -- # '[' 1519 -ge 100 ']' 00:12:41.383 16:23:50 -- target/host_management.sh@59 -- # ret=0 00:12:41.383 16:23:50 -- target/host_management.sh@60 -- # break 00:12:41.383 16:23:50 -- target/host_management.sh@64 -- # return 0 00:12:41.383 16:23:50 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:41.383 16:23:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:41.383 16:23:50 -- common/autotest_common.sh@10 -- # set +x 00:12:41.383 16:23:50 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:41.383 16:23:50 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:41.383 16:23:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:41.383 16:23:50 -- common/autotest_common.sh@10 -- # set +x 00:12:41.383 16:23:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:41.383 16:23:50 -- target/host_management.sh@87 -- # sleep 1 00:12:42.321 [2024-04-26 16:23:51.327621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138da900 len:0x10000 key:0x181500 00:12:42.321 [2024-04-26 16:23:51.327661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.321 [2024-04-26 16:23:51.327683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:82048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ca880 len:0x10000 key:0x181500 00:12:42.321 [2024-04-26 16:23:51.327693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.321 [2024-04-26 16:23:51.327705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ba800 len:0x10000 key:0x181500 00:12:42.321 [2024-04-26 16:23:51.327715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.321 [2024-04-26 16:23:51.327731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138aa780 len:0x10000 key:0x181500 00:12:42.321 [2024-04-26 16:23:51.327741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.321 [2024-04-26 16:23:51.327752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001389a700 len:0x10000 key:0x181500 00:12:42.321 [2024-04-26 16:23:51.327761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.321 [2024-04-26 16:23:51.327773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001388a680 len:0x10000 key:0x181500 00:12:42.321 [2024-04-26 16:23:51.327782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.321 [2024-04-26 16:23:51.327794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001387a600 len:0x10000 key:0x181500 00:12:42.321 [2024-04-26 16:23:51.327803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.321 [2024-04-26 16:23:51.327814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001386a580 len:0x10000 key:0x181500 00:12:42.321 [2024-04-26 16:23:51.327823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.321 [2024-04-26 16:23:51.327834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001385a500 len:0x10000 key:0x181500 00:12:42.321 [2024-04-26 16:23:51.327844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.321 [2024-04-26 16:23:51.327855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001384a480 len:0x10000 key:0x181500 00:12:42.321 [2024-04-26 16:23:51.327864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.321 [2024-04-26 16:23:51.327875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:83200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001383a400 len:0x10000 key:0x181500 00:12:42.321 [2024-04-26 16:23:51.327884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.321 [2024-04-26 16:23:51.327895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001382a380 len:0x10000 key:0x181500 00:12:42.321 [2024-04-26 16:23:51.327905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.321 [2024-04-26 16:23:51.327916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:83456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001381a300 len:0x10000 key:0x181500 00:12:42.321 [2024-04-26 16:23:51.327925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.321 [2024-04-26 16:23:51.327936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:83584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001380a280 len:0x10000 key:0x181500 00:12:42.321 [2024-04-26 16:23:51.327946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.321 [2024-04-26 16:23:51.327957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192d1e80 len:0x10000 key:0x181800 00:12:42.321 [2024-04-26 16:23:51.327968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.321 [2024-04-26 16:23:51.327978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192c1e00 len:0x10000 key:0x181800 00:12:42.321 [2024-04-26 16:23:51.327988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.321 [2024-04-26 16:23:51.327999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192b1d80 len:0x10000 key:0x181800 00:12:42.321 [2024-04-26 16:23:51.328009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.321 [2024-04-26 16:23:51.328019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192a1d00 len:0x10000 key:0x181800 00:12:42.321 [2024-04-26 16:23:51.328029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.321 [2024-04-26 16:23:51.328040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019291c80 len:0x10000 key:0x181800 00:12:42.321 [2024-04-26 16:23:51.328049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.321 [2024-04-26 16:23:51.328060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019281c00 len:0x10000 key:0x181800 00:12:42.321 [2024-04-26 16:23:51.328070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.321 [2024-04-26 16:23:51.328082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019271b80 len:0x10000 key:0x181800 00:12:42.321 [2024-04-26 16:23:51.328091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.321 [2024-04-26 16:23:51.328102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019261b00 len:0x10000 key:0x181800 00:12:42.321 [2024-04-26 16:23:51.328111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.321 [2024-04-26 16:23:51.328122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019251a80 len:0x10000 key:0x181800 00:12:42.321 [2024-04-26 16:23:51.328132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.321 [2024-04-26 16:23:51.328143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019241a00 len:0x10000 key:0x181800 00:12:42.321 [2024-04-26 16:23:51.328152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.321 [2024-04-26 16:23:51.328163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019231980 len:0x10000 key:0x181800 00:12:42.321 [2024-04-26 16:23:51.328172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.321 [2024-04-26 16:23:51.328183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019221900 len:0x10000 key:0x181800 00:12:42.321 [2024-04-26 16:23:51.328197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.321 [2024-04-26 16:23:51.328208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019211880 len:0x10000 key:0x181800 00:12:42.321 [2024-04-26 16:23:51.328217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.321 [2024-04-26 16:23:51.328228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019201800 len:0x10000 key:0x181800 00:12:42.321 [2024-04-26 16:23:51.328237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.321 [2024-04-26 16:23:51.328248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190eff80 len:0x10000 key:0x181700 00:12:42.321 [2024-04-26 16:23:51.328260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.321 [2024-04-26 16:23:51.328271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190dff00 len:0x10000 key:0x181700 00:12:42.321 [2024-04-26 16:23:51.328281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.321 [2024-04-26 16:23:51.328291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190cfe80 len:0x10000 key:0x181700 00:12:42.321 [2024-04-26 16:23:51.328301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.321 [2024-04-26 16:23:51.328311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190bfe00 len:0x10000 key:0x181700 00:12:42.321 [2024-04-26 16:23:51.328321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.321 [2024-04-26 16:23:51.328331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190afd80 len:0x10000 key:0x181700 00:12:42.321 [2024-04-26 16:23:51.328340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.321 [2024-04-26 16:23:51.328368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001909fd00 len:0x10000 key:0x181700 00:12:42.321 [2024-04-26 16:23:51.328377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.321 [2024-04-26 16:23:51.328388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001908fc80 len:0x10000 key:0x181700 00:12:42.321 [2024-04-26 16:23:51.328397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.321 [2024-04-26 16:23:51.328408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001907fc00 len:0x10000 key:0x181700 00:12:42.321 [2024-04-26 16:23:51.328417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.321 [2024-04-26 16:23:51.328428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001906fb80 len:0x10000 key:0x181700 00:12:42.321 [2024-04-26 16:23:51.328439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.321 [2024-04-26 16:23:51.328450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001905fb00 len:0x10000 key:0x181700 00:12:42.321 [2024-04-26 16:23:51.328459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.321 [2024-04-26 16:23:51.328470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001904fa80 len:0x10000 key:0x181700 00:12:42.321 [2024-04-26 16:23:51.328480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.321 [2024-04-26 16:23:51.328490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001903fa00 len:0x10000 key:0x181700 00:12:42.321 [2024-04-26 16:23:51.328499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.321 [2024-04-26 16:23:51.328510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001902f980 len:0x10000 key:0x181700 00:12:42.321 [2024-04-26 16:23:51.328519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.321 [2024-04-26 16:23:51.328530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001901f900 len:0x10000 key:0x181700 00:12:42.321 [2024-04-26 16:23:51.328539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.321 [2024-04-26 16:23:51.328550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001900f880 len:0x10000 key:0x181700 00:12:42.321 [2024-04-26 16:23:51.328559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.321 [2024-04-26 16:23:51.328570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eeff80 len:0x10000 key:0x181600 00:12:42.322 [2024-04-26 16:23:51.328579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.322 [2024-04-26 16:23:51.328589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d419000 len:0x10000 key:0x181400 00:12:42.322 [2024-04-26 16:23:51.328599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.322 [2024-04-26 16:23:51.328610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:79488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d43a000 len:0x10000 key:0x181400 00:12:42.322 [2024-04-26 16:23:51.328621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.322 [2024-04-26 16:23:51.328632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d45b000 len:0x10000 key:0x181400 00:12:42.322 [2024-04-26 16:23:51.328640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.322 [2024-04-26 16:23:51.328651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d47c000 len:0x10000 key:0x181400 00:12:42.322 [2024-04-26 16:23:51.328662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.322 [2024-04-26 16:23:51.328673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d49d000 len:0x10000 key:0x181400 00:12:42.322 [2024-04-26 16:23:51.328682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.322 [2024-04-26 16:23:51.328693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d4be000 len:0x10000 key:0x181400 00:12:42.322 [2024-04-26 16:23:51.328702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.322 [2024-04-26 16:23:51.328713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d4df000 len:0x10000 key:0x181400 00:12:42.322 [2024-04-26 16:23:51.328722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.322 [2024-04-26 16:23:51.328733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d0e0000 len:0x10000 key:0x181400 00:12:42.322 [2024-04-26 16:23:51.328742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.322 [2024-04-26 16:23:51.328753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cde9000 len:0x10000 key:0x181400 00:12:42.322 [2024-04-26 16:23:51.328762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 
cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.322 [2024-04-26 16:23:51.328773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:80512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce0a000 len:0x10000 key:0x181400 00:12:42.322 [2024-04-26 16:23:51.328782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.322 [2024-04-26 16:23:51.328792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce2b000 len:0x10000 key:0x181400 00:12:42.322 [2024-04-26 16:23:51.328801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.322 [2024-04-26 16:23:51.328812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:80768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce4c000 len:0x10000 key:0x181400 00:12:42.322 [2024-04-26 16:23:51.328822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.322 [2024-04-26 16:23:51.328833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:80896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d8de000 len:0x10000 key:0x181400 00:12:42.322 [2024-04-26 16:23:51.328842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.322 [2024-04-26 16:23:51.328853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:81024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d8bd000 len:0x10000 key:0x181400 00:12:42.322 [2024-04-26 16:23:51.328862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.322 [2024-04-26 16:23:51.328873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d89c000 len:0x10000 key:0x181400 00:12:42.322 [2024-04-26 16:23:51.328882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.322 [2024-04-26 16:23:51.328894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d87b000 len:0x10000 key:0x181400 00:12:42.322 [2024-04-26 16:23:51.328903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.322 [2024-04-26 16:23:51.328914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d794000 len:0x10000 key:0x181400 00:12:42.322 [2024-04-26 16:23:51.328925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.322 [2024-04-26 16:23:51.328936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d773000 len:0x10000 key:0x181400 00:12:42.322 [2024-04-26 16:23:51.328946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 
sqhd:04c0 p:0 m:0 dnr:0 00:12:42.322 [2024-04-26 16:23:51.328957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d752000 len:0x10000 key:0x181400 00:12:42.322 [2024-04-26 16:23:51.328966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.322 [2024-04-26 16:23:51.328977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:81792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d731000 len:0x10000 key:0x181400 00:12:42.322 [2024-04-26 16:23:51.328986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:04c0 p:0 m:0 dnr:0 00:12:42.322 [2024-04-26 16:23:51.330295] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192015c0 was disconnected and freed. reset controller. 00:12:42.322 [2024-04-26 16:23:51.331206] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:12:42.322 task offset: 81920 on job bdev=Nvme0n1 fails 00:12:42.322 00:12:42.322 Latency(us) 00:12:42.322 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:42.322 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:42.322 Job: Nvme0n1 ended in about 1.57 seconds with error 00:12:42.322 Verification LBA range: start 0x0 length 0x400 00:12:42.322 Nvme0n1 : 1.57 1047.67 65.48 40.79 0.00 58253.44 2051.56 1021221.84 00:12:42.322 =================================================================================================================== 00:12:42.322 Total : 1047.67 65.48 40.79 0.00 58253.44 2051.56 1021221.84 00:12:42.322 [2024-04-26 16:23:51.333113] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:42.322 16:23:51 -- target/host_management.sh@91 -- # kill -9 427559 00:12:42.322 16:23:51 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:12:42.581 16:23:51 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:12:42.581 16:23:51 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:12:42.581 16:23:51 -- nvmf/common.sh@521 -- # config=() 00:12:42.581 16:23:51 -- nvmf/common.sh@521 -- # local subsystem config 00:12:42.581 16:23:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:12:42.581 16:23:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:12:42.581 { 00:12:42.581 "params": { 00:12:42.581 "name": "Nvme$subsystem", 00:12:42.581 "trtype": "$TEST_TRANSPORT", 00:12:42.581 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:42.581 "adrfam": "ipv4", 00:12:42.581 "trsvcid": "$NVMF_PORT", 00:12:42.581 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:42.581 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:42.581 "hdgst": ${hdgst:-false}, 00:12:42.581 "ddgst": ${ddgst:-false} 00:12:42.581 }, 00:12:42.581 "method": "bdev_nvme_attach_controller" 00:12:42.581 } 00:12:42.581 EOF 00:12:42.581 )") 00:12:42.581 16:23:51 -- nvmf/common.sh@543 -- # cat 00:12:42.581 16:23:51 -- nvmf/common.sh@545 -- # jq . 
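The shell trace just above and below shows the host_management test generating a bdevperf JSON config on the fly (gen_nvmf_target_json) and handing it over through process substitution. A minimal standalone sketch of the same idea, assuming an SPDK checkout at $rootdir and that the RDMA target from this run (192.168.100.8:4420, subsystem nqn.2016-06.io.spdk:cnode0) is still listening; gen_cnode0_json is a hypothetical helper for illustration, not part of the test scripts, and the JSON wrapper shape is the standard SPDK app config layout rather than the script's literal output:

gen_cnode0_json() {
  # Emit an SPDK app JSON config that attaches one NVMe-oF/RDMA controller,
  # mirroring the parameters printed in the trace above.
  cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "rdma",
            "traddr": "192.168.100.8",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}

# Same queue depth, IO size, workload and runtime as the run logged here.
"$rootdir/build/examples/bdevperf" --json <(gen_cnode0_json) -q 64 -o 65536 -w verify -t 1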
00:12:42.581 16:23:51 -- nvmf/common.sh@546 -- # IFS=, 00:12:42.581 16:23:51 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:12:42.581 "params": { 00:12:42.581 "name": "Nvme0", 00:12:42.581 "trtype": "rdma", 00:12:42.581 "traddr": "192.168.100.8", 00:12:42.581 "adrfam": "ipv4", 00:12:42.581 "trsvcid": "4420", 00:12:42.581 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:42.581 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:42.581 "hdgst": false, 00:12:42.581 "ddgst": false 00:12:42.581 }, 00:12:42.581 "method": "bdev_nvme_attach_controller" 00:12:42.581 }' 00:12:42.581 [2024-04-26 16:23:51.388977] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:12:42.581 [2024-04-26 16:23:51.389033] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid427927 ] 00:12:42.581 EAL: No free 2048 kB hugepages reported on node 1 00:12:42.581 [2024-04-26 16:23:51.460909] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.581 [2024-04-26 16:23:51.541469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.841 Running I/O for 1 seconds... 00:12:43.779 00:12:43.779 Latency(us) 00:12:43.779 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:43.779 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:43.779 Verification LBA range: start 0x0 length 0x400 00:12:43.779 Nvme0n1 : 1.01 3067.98 191.75 0.00 0.00 20438.33 690.98 27354.16 00:12:43.779 =================================================================================================================== 00:12:43.779 Total : 3067.98 191.75 0.00 0.00 20438.33 690.98 27354.16 00:12:44.038 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 427559 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:12:44.038 16:23:52 -- target/host_management.sh@102 -- # stoptarget 00:12:44.038 16:23:52 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:12:44.038 16:23:52 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:12:44.038 16:23:52 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:12:44.038 16:23:52 -- target/host_management.sh@40 -- # nvmftestfini 00:12:44.038 16:23:52 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:44.038 16:23:52 -- nvmf/common.sh@117 -- # sync 00:12:44.038 16:23:52 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:44.038 16:23:53 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:44.038 16:23:53 -- nvmf/common.sh@120 -- # set +e 00:12:44.038 16:23:53 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:44.038 16:23:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:44.038 rmmod nvme_rdma 00:12:44.038 rmmod nvme_fabrics 00:12:44.038 16:23:53 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:44.038 16:23:53 -- nvmf/common.sh@124 -- # set -e 00:12:44.038 16:23:53 -- nvmf/common.sh@125 -- # return 0 00:12:44.038 16:23:53 -- nvmf/common.sh@478 -- # '[' -n 427352 ']' 00:12:44.038 16:23:53 -- nvmf/common.sh@479 -- # killprocess 427352 00:12:44.038 16:23:53 -- common/autotest_common.sh@936 -- # '[' -z 427352 ']' 00:12:44.038 16:23:53 -- 
common/autotest_common.sh@940 -- # kill -0 427352 00:12:44.038 16:23:53 -- common/autotest_common.sh@941 -- # uname 00:12:44.038 16:23:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:44.038 16:23:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 427352 00:12:44.297 16:23:53 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:44.297 16:23:53 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:44.297 16:23:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 427352' 00:12:44.297 killing process with pid 427352 00:12:44.297 16:23:53 -- common/autotest_common.sh@955 -- # kill 427352 00:12:44.297 16:23:53 -- common/autotest_common.sh@960 -- # wait 427352 00:12:44.297 [2024-04-26 16:23:53.186240] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:12:44.556 [2024-04-26 16:23:53.408021] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:12:44.556 16:23:53 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:44.556 16:23:53 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:12:44.556 00:12:44.556 real 0m5.251s 00:12:44.556 user 0m23.270s 00:12:44.556 sys 0m1.142s 00:12:44.556 16:23:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:44.556 16:23:53 -- common/autotest_common.sh@10 -- # set +x 00:12:44.556 ************************************ 00:12:44.556 END TEST nvmf_host_management 00:12:44.556 ************************************ 00:12:44.556 16:23:53 -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:12:44.556 00:12:44.556 real 0m12.439s 00:12:44.556 user 0m25.323s 00:12:44.556 sys 0m6.481s 00:12:44.556 16:23:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:44.556 16:23:53 -- common/autotest_common.sh@10 -- # set +x 00:12:44.556 ************************************ 00:12:44.556 END TEST nvmf_host_management 00:12:44.556 ************************************ 00:12:44.556 16:23:53 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:12:44.556 16:23:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:44.556 16:23:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:44.556 16:23:53 -- common/autotest_common.sh@10 -- # set +x 00:12:44.814 ************************************ 00:12:44.814 START TEST nvmf_lvol 00:12:44.814 ************************************ 00:12:44.814 16:23:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:12:44.814 * Looking for test storage... 
00:12:44.814 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:44.814 16:23:53 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:44.814 16:23:53 -- nvmf/common.sh@7 -- # uname -s 00:12:44.814 16:23:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:44.814 16:23:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:44.814 16:23:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:44.814 16:23:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:44.814 16:23:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:44.814 16:23:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:44.814 16:23:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:44.814 16:23:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:44.814 16:23:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:44.814 16:23:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:44.814 16:23:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:12:44.814 16:23:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:12:44.814 16:23:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:44.814 16:23:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:44.814 16:23:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:44.814 16:23:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:44.814 16:23:53 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:44.814 16:23:53 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:44.814 16:23:53 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:44.814 16:23:53 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:44.814 16:23:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.814 16:23:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.814 16:23:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.814 16:23:53 -- paths/export.sh@5 -- # export PATH 00:12:44.814 16:23:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.814 16:23:53 -- nvmf/common.sh@47 -- # : 0 00:12:45.072 16:23:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:45.072 16:23:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:45.072 16:23:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:45.072 16:23:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:45.072 16:23:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:45.072 16:23:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:45.072 16:23:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:45.072 16:23:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:45.072 16:23:53 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:45.072 16:23:53 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:45.072 16:23:53 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:12:45.072 16:23:53 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:12:45.072 16:23:53 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:45.072 16:23:53 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:12:45.072 16:23:53 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:12:45.072 16:23:53 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:45.072 16:23:53 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:45.072 16:23:53 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:45.072 16:23:53 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:45.072 16:23:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.072 16:23:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:45.072 16:23:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.072 16:23:53 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:12:45.072 16:23:53 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:45.072 16:23:53 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:45.072 16:23:53 -- common/autotest_common.sh@10 -- # set +x 00:12:50.339 16:23:59 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:50.339 16:23:59 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:50.339 16:23:59 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:50.339 16:23:59 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:50.339 16:23:59 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:50.339 16:23:59 -- 
nvmf/common.sh@293 -- # pci_drivers=() 00:12:50.339 16:23:59 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:50.339 16:23:59 -- nvmf/common.sh@295 -- # net_devs=() 00:12:50.339 16:23:59 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:50.339 16:23:59 -- nvmf/common.sh@296 -- # e810=() 00:12:50.339 16:23:59 -- nvmf/common.sh@296 -- # local -ga e810 00:12:50.339 16:23:59 -- nvmf/common.sh@297 -- # x722=() 00:12:50.339 16:23:59 -- nvmf/common.sh@297 -- # local -ga x722 00:12:50.339 16:23:59 -- nvmf/common.sh@298 -- # mlx=() 00:12:50.339 16:23:59 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:50.339 16:23:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:50.339 16:23:59 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:50.339 16:23:59 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:50.339 16:23:59 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:50.339 16:23:59 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:50.339 16:23:59 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:50.339 16:23:59 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:50.339 16:23:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:50.339 16:23:59 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:50.339 16:23:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:50.339 16:23:59 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:50.339 16:23:59 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:50.339 16:23:59 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:50.339 16:23:59 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:50.339 16:23:59 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:50.339 16:23:59 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:50.339 16:23:59 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:50.339 16:23:59 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:50.339 16:23:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:50.339 16:23:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:12:50.339 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:12:50.339 16:23:59 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:50.339 16:23:59 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:50.339 16:23:59 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:12:50.339 16:23:59 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:12:50.339 16:23:59 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:50.339 16:23:59 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:50.339 16:23:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:50.339 16:23:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:12:50.339 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:12:50.339 16:23:59 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:50.339 16:23:59 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:50.339 16:23:59 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:12:50.339 16:23:59 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:12:50.339 16:23:59 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:50.339 16:23:59 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:50.339 16:23:59 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:50.339 16:23:59 -- 
nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:50.339 16:23:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:50.339 16:23:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:50.339 16:23:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:50.339 16:23:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:50.339 16:23:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:12:50.339 Found net devices under 0000:18:00.0: mlx_0_0 00:12:50.339 16:23:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:50.339 16:23:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:50.339 16:23:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:50.339 16:23:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:50.339 16:23:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:50.339 16:23:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:12:50.339 Found net devices under 0000:18:00.1: mlx_0_1 00:12:50.339 16:23:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:50.339 16:23:59 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:50.339 16:23:59 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:50.339 16:23:59 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:50.339 16:23:59 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:12:50.339 16:23:59 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:12:50.339 16:23:59 -- nvmf/common.sh@409 -- # rdma_device_init 00:12:50.339 16:23:59 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:12:50.599 16:23:59 -- nvmf/common.sh@58 -- # uname 00:12:50.599 16:23:59 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:50.599 16:23:59 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:50.599 16:23:59 -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:50.599 16:23:59 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:50.599 16:23:59 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:50.599 16:23:59 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:50.599 16:23:59 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:50.599 16:23:59 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:50.599 16:23:59 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:12:50.599 16:23:59 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:50.599 16:23:59 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:50.599 16:23:59 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:50.599 16:23:59 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:50.599 16:23:59 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:50.599 16:23:59 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:50.599 16:23:59 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:50.599 16:23:59 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:50.599 16:23:59 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:50.599 16:23:59 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:50.599 16:23:59 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:50.599 16:23:59 -- nvmf/common.sh@105 -- # continue 2 00:12:50.599 16:23:59 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:50.599 16:23:59 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:50.599 16:23:59 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:50.599 16:23:59 -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:12:50.599 16:23:59 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:50.599 16:23:59 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:50.599 16:23:59 -- nvmf/common.sh@105 -- # continue 2 00:12:50.599 16:23:59 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:50.599 16:23:59 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:50.599 16:23:59 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:50.599 16:23:59 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:50.599 16:23:59 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:50.599 16:23:59 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:50.599 16:23:59 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:50.599 16:23:59 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:50.599 16:23:59 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:50.599 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:50.599 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:12:50.599 altname enp24s0f0np0 00:12:50.599 altname ens785f0np0 00:12:50.599 inet 192.168.100.8/24 scope global mlx_0_0 00:12:50.599 valid_lft forever preferred_lft forever 00:12:50.599 16:23:59 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:50.599 16:23:59 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:50.599 16:23:59 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:50.599 16:23:59 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:50.599 16:23:59 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:50.599 16:23:59 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:50.599 16:23:59 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:50.599 16:23:59 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:50.599 16:23:59 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:50.599 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:50.599 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:12:50.599 altname enp24s0f1np1 00:12:50.599 altname ens785f1np1 00:12:50.599 inet 192.168.100.9/24 scope global mlx_0_1 00:12:50.599 valid_lft forever preferred_lft forever 00:12:50.599 16:23:59 -- nvmf/common.sh@411 -- # return 0 00:12:50.599 16:23:59 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:50.599 16:23:59 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:50.599 16:23:59 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:12:50.599 16:23:59 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:12:50.599 16:23:59 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:50.599 16:23:59 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:50.599 16:23:59 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:50.599 16:23:59 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:50.599 16:23:59 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:50.599 16:23:59 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:50.599 16:23:59 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:50.599 16:23:59 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:50.599 16:23:59 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:50.599 16:23:59 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:50.599 16:23:59 -- nvmf/common.sh@105 -- # continue 2 00:12:50.599 16:23:59 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:50.599 16:23:59 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:50.599 16:23:59 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\0 ]] 00:12:50.599 16:23:59 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:50.599 16:23:59 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:50.599 16:23:59 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:50.599 16:23:59 -- nvmf/common.sh@105 -- # continue 2 00:12:50.599 16:23:59 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:50.599 16:23:59 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:50.599 16:23:59 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:50.599 16:23:59 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:50.599 16:23:59 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:50.600 16:23:59 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:50.600 16:23:59 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:50.600 16:23:59 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:50.600 16:23:59 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:50.600 16:23:59 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:50.600 16:23:59 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:50.600 16:23:59 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:50.600 16:23:59 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:12:50.600 192.168.100.9' 00:12:50.600 16:23:59 -- nvmf/common.sh@446 -- # head -n 1 00:12:50.600 16:23:59 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:12:50.600 192.168.100.9' 00:12:50.600 16:23:59 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:50.600 16:23:59 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:12:50.600 192.168.100.9' 00:12:50.600 16:23:59 -- nvmf/common.sh@447 -- # tail -n +2 00:12:50.600 16:23:59 -- nvmf/common.sh@447 -- # head -n 1 00:12:50.600 16:23:59 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:50.600 16:23:59 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:12:50.600 16:23:59 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:50.600 16:23:59 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:12:50.600 16:23:59 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:12:50.600 16:23:59 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:12:50.600 16:23:59 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:12:50.600 16:23:59 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:50.600 16:23:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:50.600 16:23:59 -- common/autotest_common.sh@10 -- # set +x 00:12:50.600 16:23:59 -- nvmf/common.sh@470 -- # nvmfpid=431063 00:12:50.600 16:23:59 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:50.600 16:23:59 -- nvmf/common.sh@471 -- # waitforlisten 431063 00:12:50.600 16:23:59 -- common/autotest_common.sh@817 -- # '[' -z 431063 ']' 00:12:50.600 16:23:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.600 16:23:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:50.600 16:23:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:50.600 16:23:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:50.600 16:23:59 -- common/autotest_common.sh@10 -- # set +x 00:12:50.600 [2024-04-26 16:23:59.614997] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
00:12:50.600 [2024-04-26 16:23:59.615054] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:50.859 EAL: No free 2048 kB hugepages reported on node 1 00:12:50.859 [2024-04-26 16:23:59.687622] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:50.859 [2024-04-26 16:23:59.768291] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:50.859 [2024-04-26 16:23:59.768336] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:50.859 [2024-04-26 16:23:59.768349] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:50.859 [2024-04-26 16:23:59.768357] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:50.859 [2024-04-26 16:23:59.768364] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:50.859 [2024-04-26 16:23:59.768422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:50.859 [2024-04-26 16:23:59.768512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:50.859 [2024-04-26 16:23:59.768515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.425 16:24:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:51.425 16:24:00 -- common/autotest_common.sh@850 -- # return 0 00:12:51.425 16:24:00 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:51.425 16:24:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:51.425 16:24:00 -- common/autotest_common.sh@10 -- # set +x 00:12:51.684 16:24:00 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:51.684 16:24:00 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:51.684 [2024-04-26 16:24:00.657172] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e3d830/0x1e41d20) succeed. 00:12:51.684 [2024-04-26 16:24:00.667360] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e3edd0/0x1e833b0) succeed. 
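The nvmf_lvol trace that follows drives a fixed sequence of rpc.py calls: two 64 MiB / 512 B malloc bdevs striped into a raid0, an lvstore on top, a 20 MiB lvol exported over NVMe-oF/RDMA, then snapshot, resize, clone and inflate. A condensed sketch of that sequence, with the commands taken from the trace; capturing the returned bdev names and UUIDs in shell variables is an assumption about how the script wires the outputs together:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

base0=$($rpc bdev_malloc_create 64 512)                        # -> Malloc0
base1=$($rpc bdev_malloc_create 64 512)                        # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "$base0 $base1"   # stripe both malloc bdevs
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                 # lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                # 20 MiB logical volume

$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420

snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)            # freeze current contents
$rpc bdev_lvol_resize "$lvol" 30                               # grow the live lvol to 30 MiB
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)                 # thin clone of the snapshot
$rpc bdev_lvol_inflate "$clone"                                # make the clone independent of its parent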
00:12:51.942 16:24:00 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:52.200 16:24:00 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:12:52.200 16:24:00 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:52.200 16:24:01 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:12:52.200 16:24:01 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:12:52.459 16:24:01 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:12:52.717 16:24:01 -- target/nvmf_lvol.sh@29 -- # lvs=0307a6e8-51de-4f2f-b2f1-b7ac41ce1e27 00:12:52.717 16:24:01 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0307a6e8-51de-4f2f-b2f1-b7ac41ce1e27 lvol 20 00:12:52.717 16:24:01 -- target/nvmf_lvol.sh@32 -- # lvol=a17a7164-8c21-44ba-9b46-129d9d3f2c45 00:12:52.717 16:24:01 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:52.977 16:24:01 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a17a7164-8c21-44ba-9b46-129d9d3f2c45 00:12:53.235 16:24:02 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:12:53.493 [2024-04-26 16:24:02.285679] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:53.493 16:24:02 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:12:53.493 16:24:02 -- target/nvmf_lvol.sh@42 -- # perf_pid=431589 00:12:53.493 16:24:02 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:12:53.493 16:24:02 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:12:53.752 EAL: No free 2048 kB hugepages reported on node 1 00:12:54.686 16:24:03 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot a17a7164-8c21-44ba-9b46-129d9d3f2c45 MY_SNAPSHOT 00:12:54.686 16:24:03 -- target/nvmf_lvol.sh@47 -- # snapshot=d50af26c-95df-4c26-bea2-341fb41bb966 00:12:54.686 16:24:03 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize a17a7164-8c21-44ba-9b46-129d9d3f2c45 30 00:12:54.946 16:24:03 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone d50af26c-95df-4c26-bea2-341fb41bb966 MY_CLONE 00:12:55.204 16:24:04 -- target/nvmf_lvol.sh@49 -- # clone=1ab9159d-2ab9-403e-968c-657b5c30c22f 00:12:55.204 16:24:04 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 1ab9159d-2ab9-403e-968c-657b5c30c22f 00:12:55.463 16:24:04 -- target/nvmf_lvol.sh@53 -- # wait 431589 00:13:05.445 Initializing NVMe Controllers 00:13:05.445 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: 
nqn.2016-06.io.spdk:cnode0 00:13:05.445 Controller IO queue size 128, less than required. 00:13:05.445 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:05.445 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:13:05.445 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:13:05.445 Initialization complete. Launching workers. 00:13:05.445 ======================================================== 00:13:05.445 Latency(us) 00:13:05.445 Device Information : IOPS MiB/s Average min max 00:13:05.445 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16770.40 65.51 7634.40 2138.07 37841.36 00:13:05.445 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16684.80 65.17 7672.93 3666.13 47151.86 00:13:05.445 ======================================================== 00:13:05.445 Total : 33455.20 130.68 7653.62 2138.07 47151.86 00:13:05.445 00:13:05.445 16:24:13 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:05.445 16:24:14 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a17a7164-8c21-44ba-9b46-129d9d3f2c45 00:13:05.445 16:24:14 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0307a6e8-51de-4f2f-b2f1-b7ac41ce1e27 00:13:05.445 16:24:14 -- target/nvmf_lvol.sh@60 -- # rm -f 00:13:05.445 16:24:14 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:13:05.445 16:24:14 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:13:05.445 16:24:14 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:05.445 16:24:14 -- nvmf/common.sh@117 -- # sync 00:13:05.445 16:24:14 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:05.445 16:24:14 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:05.445 16:24:14 -- nvmf/common.sh@120 -- # set +e 00:13:05.445 16:24:14 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:05.445 16:24:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:05.445 rmmod nvme_rdma 00:13:05.445 rmmod nvme_fabrics 00:13:05.704 16:24:14 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:05.704 16:24:14 -- nvmf/common.sh@124 -- # set -e 00:13:05.704 16:24:14 -- nvmf/common.sh@125 -- # return 0 00:13:05.704 16:24:14 -- nvmf/common.sh@478 -- # '[' -n 431063 ']' 00:13:05.704 16:24:14 -- nvmf/common.sh@479 -- # killprocess 431063 00:13:05.704 16:24:14 -- common/autotest_common.sh@936 -- # '[' -z 431063 ']' 00:13:05.704 16:24:14 -- common/autotest_common.sh@940 -- # kill -0 431063 00:13:05.704 16:24:14 -- common/autotest_common.sh@941 -- # uname 00:13:05.704 16:24:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:05.704 16:24:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 431063 00:13:05.704 16:24:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:05.704 16:24:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:05.704 16:24:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 431063' 00:13:05.704 killing process with pid 431063 00:13:05.704 16:24:14 -- common/autotest_common.sh@955 -- # kill 431063 00:13:05.704 16:24:14 -- common/autotest_common.sh@960 -- # wait 431063 00:13:05.704 [2024-04-26 16:24:14.612588] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 
4095 but should be 2048 00:13:05.963 16:24:14 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:05.963 16:24:14 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:13:05.963 00:13:05.963 real 0m21.155s 00:13:05.963 user 1m11.213s 00:13:05.963 sys 0m5.734s 00:13:05.963 16:24:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:05.963 16:24:14 -- common/autotest_common.sh@10 -- # set +x 00:13:05.963 ************************************ 00:13:05.963 END TEST nvmf_lvol 00:13:05.963 ************************************ 00:13:05.964 16:24:14 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:13:05.964 16:24:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:05.964 16:24:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:05.964 16:24:14 -- common/autotest_common.sh@10 -- # set +x 00:13:06.223 ************************************ 00:13:06.223 START TEST nvmf_lvs_grow 00:13:06.223 ************************************ 00:13:06.223 16:24:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:13:06.223 * Looking for test storage... 00:13:06.223 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:06.223 16:24:15 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:06.223 16:24:15 -- nvmf/common.sh@7 -- # uname -s 00:13:06.223 16:24:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:06.223 16:24:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:06.223 16:24:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:06.223 16:24:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:06.223 16:24:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:06.223 16:24:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:06.223 16:24:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:06.223 16:24:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:06.223 16:24:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:06.223 16:24:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:06.223 16:24:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:13:06.223 16:24:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:13:06.223 16:24:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:06.223 16:24:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:06.223 16:24:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:06.223 16:24:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:06.223 16:24:15 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:06.223 16:24:15 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:06.223 16:24:15 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:06.223 16:24:15 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:06.224 16:24:15 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.224 16:24:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.224 16:24:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.224 16:24:15 -- paths/export.sh@5 -- # export PATH 00:13:06.224 16:24:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.224 16:24:15 -- nvmf/common.sh@47 -- # : 0 00:13:06.224 16:24:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:06.224 16:24:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:06.224 16:24:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:06.224 16:24:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:06.224 16:24:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:06.224 16:24:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:06.224 16:24:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:06.224 16:24:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:06.224 16:24:15 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:06.224 16:24:15 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:06.224 16:24:15 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:13:06.224 16:24:15 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:13:06.224 16:24:15 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:06.224 16:24:15 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:06.224 16:24:15 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:06.224 16:24:15 -- nvmf/common.sh@401 -- # 
remove_spdk_ns 00:13:06.224 16:24:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.224 16:24:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:06.224 16:24:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.224 16:24:15 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:06.224 16:24:15 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:06.224 16:24:15 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:06.224 16:24:15 -- common/autotest_common.sh@10 -- # set +x 00:13:12.792 16:24:21 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:12.792 16:24:21 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:12.792 16:24:21 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:12.792 16:24:21 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:12.792 16:24:21 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:12.792 16:24:21 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:12.792 16:24:21 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:12.792 16:24:21 -- nvmf/common.sh@295 -- # net_devs=() 00:13:12.792 16:24:21 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:12.792 16:24:21 -- nvmf/common.sh@296 -- # e810=() 00:13:12.792 16:24:21 -- nvmf/common.sh@296 -- # local -ga e810 00:13:12.792 16:24:21 -- nvmf/common.sh@297 -- # x722=() 00:13:12.792 16:24:21 -- nvmf/common.sh@297 -- # local -ga x722 00:13:12.792 16:24:21 -- nvmf/common.sh@298 -- # mlx=() 00:13:12.792 16:24:21 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:12.792 16:24:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:12.792 16:24:21 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:12.792 16:24:21 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:12.792 16:24:21 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:12.792 16:24:21 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:12.792 16:24:21 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:12.792 16:24:21 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:12.792 16:24:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:12.792 16:24:21 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:12.792 16:24:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:12.792 16:24:21 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:12.792 16:24:21 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:12.792 16:24:21 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:12.792 16:24:21 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:12.792 16:24:21 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:12.792 16:24:21 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:12.792 16:24:21 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:12.792 16:24:21 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:12.792 16:24:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:12.792 16:24:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:13:12.792 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:13:12.792 16:24:21 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:12.792 16:24:21 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:12.792 16:24:21 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:13:12.792 16:24:21 -- nvmf/common.sh@351 -- # [[ 0x1013 == 
\0\x\1\0\1\9 ]] 00:13:12.792 16:24:21 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:12.792 16:24:21 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:12.792 16:24:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:12.792 16:24:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:13:12.792 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:13:12.792 16:24:21 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:12.792 16:24:21 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:12.792 16:24:21 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:13:12.792 16:24:21 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:13:12.792 16:24:21 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:12.792 16:24:21 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:12.792 16:24:21 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:12.792 16:24:21 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:12.792 16:24:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:12.792 16:24:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:12.792 16:24:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:12.792 16:24:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:12.792 16:24:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:13:12.792 Found net devices under 0000:18:00.0: mlx_0_0 00:13:12.792 16:24:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:12.792 16:24:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:12.792 16:24:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:12.792 16:24:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:12.792 16:24:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:12.792 16:24:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:13:12.792 Found net devices under 0000:18:00.1: mlx_0_1 00:13:12.792 16:24:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:12.792 16:24:21 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:12.792 16:24:21 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:12.792 16:24:21 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:12.792 16:24:21 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:13:12.792 16:24:21 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:13:12.792 16:24:21 -- nvmf/common.sh@409 -- # rdma_device_init 00:13:12.792 16:24:21 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:13:12.792 16:24:21 -- nvmf/common.sh@58 -- # uname 00:13:12.792 16:24:21 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:12.792 16:24:21 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:12.792 16:24:21 -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:12.792 16:24:21 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:12.792 16:24:21 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:12.792 16:24:21 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:12.792 16:24:21 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:12.792 16:24:21 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:12.792 16:24:21 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:13:12.792 16:24:21 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:12.792 16:24:21 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:12.792 16:24:21 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:12.792 16:24:21 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:12.792 16:24:21 -- 
nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:12.792 16:24:21 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:12.792 16:24:21 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:12.792 16:24:21 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:12.792 16:24:21 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:12.792 16:24:21 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:12.792 16:24:21 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:12.792 16:24:21 -- nvmf/common.sh@105 -- # continue 2 00:13:12.792 16:24:21 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:12.792 16:24:21 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:12.792 16:24:21 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:12.792 16:24:21 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:12.792 16:24:21 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:12.792 16:24:21 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:12.792 16:24:21 -- nvmf/common.sh@105 -- # continue 2 00:13:12.793 16:24:21 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:12.793 16:24:21 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:12.793 16:24:21 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:12.793 16:24:21 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:12.793 16:24:21 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:12.793 16:24:21 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:12.793 16:24:21 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:12.793 16:24:21 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:12.793 16:24:21 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:12.793 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:12.793 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:13:12.793 altname enp24s0f0np0 00:13:12.793 altname ens785f0np0 00:13:12.793 inet 192.168.100.8/24 scope global mlx_0_0 00:13:12.793 valid_lft forever preferred_lft forever 00:13:12.793 16:24:21 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:12.793 16:24:21 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:12.793 16:24:21 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:12.793 16:24:21 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:12.793 16:24:21 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:12.793 16:24:21 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:12.793 16:24:21 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:12.793 16:24:21 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:12.793 16:24:21 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:12.793 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:12.793 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:13:12.793 altname enp24s0f1np1 00:13:12.793 altname ens785f1np1 00:13:12.793 inet 192.168.100.9/24 scope global mlx_0_1 00:13:12.793 valid_lft forever preferred_lft forever 00:13:12.793 16:24:21 -- nvmf/common.sh@411 -- # return 0 00:13:12.793 16:24:21 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:12.793 16:24:21 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:12.793 16:24:21 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:13:12.793 16:24:21 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:13:12.793 16:24:21 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:12.793 16:24:21 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:12.793 
16:24:21 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:12.793 16:24:21 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:12.793 16:24:21 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:12.793 16:24:21 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:12.793 16:24:21 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:12.793 16:24:21 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:12.793 16:24:21 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:12.793 16:24:21 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:12.793 16:24:21 -- nvmf/common.sh@105 -- # continue 2 00:13:12.793 16:24:21 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:12.793 16:24:21 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:12.793 16:24:21 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:12.793 16:24:21 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:12.793 16:24:21 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:12.793 16:24:21 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:12.793 16:24:21 -- nvmf/common.sh@105 -- # continue 2 00:13:12.793 16:24:21 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:12.793 16:24:21 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:12.793 16:24:21 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:12.793 16:24:21 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:12.793 16:24:21 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:12.793 16:24:21 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:12.793 16:24:21 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:12.793 16:24:21 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:12.793 16:24:21 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:12.793 16:24:21 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:12.793 16:24:21 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:12.793 16:24:21 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:12.793 16:24:21 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:13:12.793 192.168.100.9' 00:13:12.793 16:24:21 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:13:12.793 192.168.100.9' 00:13:12.793 16:24:21 -- nvmf/common.sh@446 -- # head -n 1 00:13:12.793 16:24:21 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:12.793 16:24:21 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:13:12.793 192.168.100.9' 00:13:12.793 16:24:21 -- nvmf/common.sh@447 -- # tail -n +2 00:13:12.793 16:24:21 -- nvmf/common.sh@447 -- # head -n 1 00:13:12.793 16:24:21 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:12.793 16:24:21 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:13:12.793 16:24:21 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:12.793 16:24:21 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:13:12.793 16:24:21 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:13:12.793 16:24:21 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:13:12.793 16:24:21 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:13:12.793 16:24:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:12.793 16:24:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:12.793 16:24:21 -- common/autotest_common.sh@10 -- # set +x 00:13:12.793 16:24:21 -- nvmf/common.sh@470 -- # nvmfpid=436477 00:13:12.793 16:24:21 -- nvmf/common.sh@469 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:12.793 16:24:21 -- nvmf/common.sh@471 -- # waitforlisten 436477 00:13:12.793 16:24:21 -- common/autotest_common.sh@817 -- # '[' -z 436477 ']' 00:13:12.793 16:24:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.793 16:24:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:12.793 16:24:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:12.793 16:24:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:12.793 16:24:21 -- common/autotest_common.sh@10 -- # set +x 00:13:12.793 [2024-04-26 16:24:21.373966] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:13:12.793 [2024-04-26 16:24:21.374022] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:12.793 EAL: No free 2048 kB hugepages reported on node 1 00:13:12.793 [2024-04-26 16:24:21.447732] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.793 [2024-04-26 16:24:21.531641] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:12.793 [2024-04-26 16:24:21.531682] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:12.793 [2024-04-26 16:24:21.531692] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:12.793 [2024-04-26 16:24:21.531701] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:12.793 [2024-04-26 16:24:21.531708] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:12.793 [2024-04-26 16:24:21.531740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.361 16:24:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:13.361 16:24:22 -- common/autotest_common.sh@850 -- # return 0 00:13:13.361 16:24:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:13.361 16:24:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:13.361 16:24:22 -- common/autotest_common.sh@10 -- # set +x 00:13:13.361 16:24:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:13.361 16:24:22 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:13.619 [2024-04-26 16:24:22.396449] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2464170/0x2468660) succeed. 00:13:13.619 [2024-04-26 16:24:22.405014] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2465670/0x24a9cf0) succeed. 
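At this point the target side is fully up: nvme-rdma is loaded, both mlx5 ports carry 192.168.100.8/.9, nvmf_tgt is listening on /var/tmp/spdk.sock, and the RDMA transport has been created on both IB devices. Condensed to its essentials, the bring-up traced above is roughly the following ($SPDK is shorthand introduced here for the workspace path; an orientation sketch, not an exact replay of the harness):

modprobe nvme-rdma
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
# start the target with all tracepoint groups enabled, pinned to core 0
$SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
# once the app is listening on /var/tmp/spdk.sock, create the RDMA transport
$SPDK/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

The lvs_grow_clean run that follows layers an AIO bdev, an lvstore and a 150M lvol on top of this target and exports the lvol through nqn.2016-06.io.spdk:cnode0.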
00:13:13.619 16:24:22 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:13:13.619 16:24:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:13.619 16:24:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:13.619 16:24:22 -- common/autotest_common.sh@10 -- # set +x 00:13:13.619 ************************************ 00:13:13.619 START TEST lvs_grow_clean 00:13:13.619 ************************************ 00:13:13.619 16:24:22 -- common/autotest_common.sh@1111 -- # lvs_grow 00:13:13.619 16:24:22 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:13.619 16:24:22 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:13.619 16:24:22 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:13.619 16:24:22 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:13.619 16:24:22 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:13.619 16:24:22 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:13.619 16:24:22 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:13.619 16:24:22 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:13.878 16:24:22 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:13.878 16:24:22 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:13.878 16:24:22 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:14.152 16:24:23 -- target/nvmf_lvs_grow.sh@28 -- # lvs=4f0b9408-1b85-4704-a330-b67c3d0bf49c 00:13:14.152 16:24:23 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4f0b9408-1b85-4704-a330-b67c3d0bf49c 00:13:14.152 16:24:23 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:14.413 16:24:23 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:14.413 16:24:23 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:14.413 16:24:23 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4f0b9408-1b85-4704-a330-b67c3d0bf49c lvol 150 00:13:14.413 16:24:23 -- target/nvmf_lvs_grow.sh@33 -- # lvol=7df8a137-efa0-4856-a0a3-a4838c673d0d 00:13:14.413 16:24:23 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:14.413 16:24:23 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:14.672 [2024-04-26 16:24:23.566164] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:14.672 [2024-04-26 16:24:23.566216] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:14.672 true 00:13:14.672 16:24:23 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:14.672 16:24:23 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
4f0b9408-1b85-4704-a330-b67c3d0bf49c 00:13:14.931 16:24:23 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:14.931 16:24:23 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:14.931 16:24:23 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7df8a137-efa0-4856-a0a3-a4838c673d0d 00:13:15.190 16:24:24 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:13:15.448 [2024-04-26 16:24:24.248375] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:15.448 16:24:24 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:13:15.448 16:24:24 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=436966 00:13:15.448 16:24:24 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:15.448 16:24:24 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:15.448 16:24:24 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 436966 /var/tmp/bdevperf.sock 00:13:15.448 16:24:24 -- common/autotest_common.sh@817 -- # '[' -z 436966 ']' 00:13:15.448 16:24:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:15.448 16:24:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:15.448 16:24:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:15.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:15.448 16:24:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:15.448 16:24:24 -- common/autotest_common.sh@10 -- # set +x 00:13:15.707 [2024-04-26 16:24:24.480916] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
00:13:15.707 [2024-04-26 16:24:24.480976] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid436966 ] 00:13:15.707 EAL: No free 2048 kB hugepages reported on node 1 00:13:15.707 [2024-04-26 16:24:24.555082] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.707 [2024-04-26 16:24:24.641645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:16.642 16:24:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:16.642 16:24:25 -- common/autotest_common.sh@850 -- # return 0 00:13:16.642 16:24:25 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:16.642 Nvme0n1 00:13:16.642 16:24:25 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:16.901 [ 00:13:16.901 { 00:13:16.901 "name": "Nvme0n1", 00:13:16.901 "aliases": [ 00:13:16.901 "7df8a137-efa0-4856-a0a3-a4838c673d0d" 00:13:16.901 ], 00:13:16.901 "product_name": "NVMe disk", 00:13:16.901 "block_size": 4096, 00:13:16.901 "num_blocks": 38912, 00:13:16.901 "uuid": "7df8a137-efa0-4856-a0a3-a4838c673d0d", 00:13:16.901 "assigned_rate_limits": { 00:13:16.901 "rw_ios_per_sec": 0, 00:13:16.901 "rw_mbytes_per_sec": 0, 00:13:16.901 "r_mbytes_per_sec": 0, 00:13:16.901 "w_mbytes_per_sec": 0 00:13:16.901 }, 00:13:16.901 "claimed": false, 00:13:16.901 "zoned": false, 00:13:16.901 "supported_io_types": { 00:13:16.901 "read": true, 00:13:16.901 "write": true, 00:13:16.901 "unmap": true, 00:13:16.901 "write_zeroes": true, 00:13:16.901 "flush": true, 00:13:16.901 "reset": true, 00:13:16.901 "compare": true, 00:13:16.901 "compare_and_write": true, 00:13:16.901 "abort": true, 00:13:16.901 "nvme_admin": true, 00:13:16.901 "nvme_io": true 00:13:16.901 }, 00:13:16.901 "memory_domains": [ 00:13:16.901 { 00:13:16.901 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:13:16.901 "dma_device_type": 0 00:13:16.901 } 00:13:16.901 ], 00:13:16.901 "driver_specific": { 00:13:16.901 "nvme": [ 00:13:16.901 { 00:13:16.901 "trid": { 00:13:16.901 "trtype": "RDMA", 00:13:16.901 "adrfam": "IPv4", 00:13:16.901 "traddr": "192.168.100.8", 00:13:16.901 "trsvcid": "4420", 00:13:16.901 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:16.901 }, 00:13:16.901 "ctrlr_data": { 00:13:16.901 "cntlid": 1, 00:13:16.901 "vendor_id": "0x8086", 00:13:16.901 "model_number": "SPDK bdev Controller", 00:13:16.901 "serial_number": "SPDK0", 00:13:16.901 "firmware_revision": "24.05", 00:13:16.901 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:16.901 "oacs": { 00:13:16.901 "security": 0, 00:13:16.901 "format": 0, 00:13:16.901 "firmware": 0, 00:13:16.901 "ns_manage": 0 00:13:16.901 }, 00:13:16.901 "multi_ctrlr": true, 00:13:16.901 "ana_reporting": false 00:13:16.901 }, 00:13:16.901 "vs": { 00:13:16.901 "nvme_version": "1.3" 00:13:16.901 }, 00:13:16.901 "ns_data": { 00:13:16.901 "id": 1, 00:13:16.901 "can_share": true 00:13:16.901 } 00:13:16.901 } 00:13:16.901 ], 00:13:16.901 "mp_policy": "active_passive" 00:13:16.901 } 00:13:16.901 } 00:13:16.901 ] 00:13:16.901 16:24:25 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=437157 00:13:16.901 16:24:25 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:16.901 16:24:25 -- 
target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:16.901 Running I/O for 10 seconds... 00:13:17.836 Latency(us) 00:13:17.836 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:17.836 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:17.836 Nvme0n1 : 1.00 35206.00 137.52 0.00 0.00 0.00 0.00 0.00 00:13:17.836 =================================================================================================================== 00:13:17.836 Total : 35206.00 137.52 0.00 0.00 0.00 0.00 0.00 00:13:17.836 00:13:18.773 16:24:27 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4f0b9408-1b85-4704-a330-b67c3d0bf49c 00:13:19.032 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:19.032 Nvme0n1 : 2.00 35582.00 138.99 0.00 0.00 0.00 0.00 0.00 00:13:19.033 =================================================================================================================== 00:13:19.033 Total : 35582.00 138.99 0.00 0.00 0.00 0.00 0.00 00:13:19.033 00:13:19.033 true 00:13:19.033 16:24:27 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4f0b9408-1b85-4704-a330-b67c3d0bf49c 00:13:19.033 16:24:27 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:19.292 16:24:28 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:19.292 16:24:28 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:19.292 16:24:28 -- target/nvmf_lvs_grow.sh@65 -- # wait 437157 00:13:19.860 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:19.860 Nvme0n1 : 3.00 35735.00 139.59 0.00 0.00 0.00 0.00 0.00 00:13:19.860 =================================================================================================================== 00:13:19.860 Total : 35735.00 139.59 0.00 0.00 0.00 0.00 0.00 00:13:19.860 00:13:21.239 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:21.240 Nvme0n1 : 4.00 35879.50 140.15 0.00 0.00 0.00 0.00 0.00 00:13:21.240 =================================================================================================================== 00:13:21.240 Total : 35879.50 140.15 0.00 0.00 0.00 0.00 0.00 00:13:21.240 00:13:22.174 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:22.174 Nvme0n1 : 5.00 35890.20 140.20 0.00 0.00 0.00 0.00 0.00 00:13:22.174 =================================================================================================================== 00:13:22.174 Total : 35890.20 140.20 0.00 0.00 0.00 0.00 0.00 00:13:22.174 00:13:23.108 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:23.108 Nvme0n1 : 6.00 35947.50 140.42 0.00 0.00 0.00 0.00 0.00 00:13:23.108 =================================================================================================================== 00:13:23.108 Total : 35947.50 140.42 0.00 0.00 0.00 0.00 0.00 00:13:23.108 00:13:24.044 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:24.044 Nvme0n1 : 7.00 36010.43 140.67 0.00 0.00 0.00 0.00 0.00 00:13:24.044 =================================================================================================================== 00:13:24.044 Total : 36010.43 140.67 0.00 0.00 0.00 0.00 0.00 00:13:24.044 00:13:24.978 
Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:24.978 Nvme0n1 : 8.00 36011.12 140.67 0.00 0.00 0.00 0.00 0.00 00:13:24.978 =================================================================================================================== 00:13:24.978 Total : 36011.12 140.67 0.00 0.00 0.00 0.00 0.00 00:13:24.978 00:13:25.912 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:25.912 Nvme0n1 : 9.00 36057.78 140.85 0.00 0.00 0.00 0.00 0.00 00:13:25.912 =================================================================================================================== 00:13:25.912 Total : 36057.78 140.85 0.00 0.00 0.00 0.00 0.00 00:13:25.912 00:13:26.849 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:26.849 Nvme0n1 : 10.00 36081.10 140.94 0.00 0.00 0.00 0.00 0.00 00:13:26.849 =================================================================================================================== 00:13:26.849 Total : 36081.10 140.94 0.00 0.00 0.00 0.00 0.00 00:13:26.849 00:13:26.849 00:13:26.849 Latency(us) 00:13:26.849 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:26.849 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:26.849 Nvme0n1 : 10.00 36080.49 140.94 0.00 0.00 3544.87 2322.25 16526.47 00:13:26.849 =================================================================================================================== 00:13:26.849 Total : 36080.49 140.94 0.00 0.00 3544.87 2322.25 16526.47 00:13:26.849 0 00:13:27.108 16:24:35 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 436966 00:13:27.108 16:24:35 -- common/autotest_common.sh@936 -- # '[' -z 436966 ']' 00:13:27.108 16:24:35 -- common/autotest_common.sh@940 -- # kill -0 436966 00:13:27.108 16:24:35 -- common/autotest_common.sh@941 -- # uname 00:13:27.108 16:24:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:27.108 16:24:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 436966 00:13:27.108 16:24:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:27.108 16:24:35 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:27.108 16:24:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 436966' 00:13:27.108 killing process with pid 436966 00:13:27.108 16:24:35 -- common/autotest_common.sh@955 -- # kill 436966 00:13:27.108 Received shutdown signal, test time was about 10.000000 seconds 00:13:27.108 00:13:27.108 Latency(us) 00:13:27.108 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:27.108 =================================================================================================================== 00:13:27.108 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:27.108 16:24:35 -- common/autotest_common.sh@960 -- # wait 436966 00:13:27.367 16:24:36 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:27.367 16:24:36 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4f0b9408-1b85-4704-a330-b67c3d0bf49c 00:13:27.367 16:24:36 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:13:27.627 16:24:36 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:13:27.627 16:24:36 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:13:27.627 16:24:36 -- target/nvmf_lvs_grow.sh@83 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:27.887 [2024-04-26 16:24:36.728895] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:27.887 16:24:36 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4f0b9408-1b85-4704-a330-b67c3d0bf49c 00:13:27.887 16:24:36 -- common/autotest_common.sh@638 -- # local es=0 00:13:27.887 16:24:36 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4f0b9408-1b85-4704-a330-b67c3d0bf49c 00:13:27.887 16:24:36 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:27.887 16:24:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:27.887 16:24:36 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:27.887 16:24:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:27.887 16:24:36 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:27.887 16:24:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:27.887 16:24:36 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:27.887 16:24:36 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:13:27.887 16:24:36 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4f0b9408-1b85-4704-a330-b67c3d0bf49c 00:13:28.148 request: 00:13:28.148 { 00:13:28.148 "uuid": "4f0b9408-1b85-4704-a330-b67c3d0bf49c", 00:13:28.148 "method": "bdev_lvol_get_lvstores", 00:13:28.148 "req_id": 1 00:13:28.148 } 00:13:28.148 Got JSON-RPC error response 00:13:28.148 response: 00:13:28.148 { 00:13:28.148 "code": -19, 00:13:28.148 "message": "No such device" 00:13:28.148 } 00:13:28.148 16:24:36 -- common/autotest_common.sh@641 -- # es=1 00:13:28.148 16:24:36 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:28.148 16:24:36 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:28.148 16:24:36 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:28.148 16:24:36 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:28.148 aio_bdev 00:13:28.148 16:24:37 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 7df8a137-efa0-4856-a0a3-a4838c673d0d 00:13:28.148 16:24:37 -- common/autotest_common.sh@885 -- # local bdev_name=7df8a137-efa0-4856-a0a3-a4838c673d0d 00:13:28.148 16:24:37 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:13:28.148 16:24:37 -- common/autotest_common.sh@887 -- # local i 00:13:28.148 16:24:37 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:13:28.148 16:24:37 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:13:28.148 16:24:37 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:28.408 16:24:37 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7df8a137-efa0-4856-a0a3-a4838c673d0d -t 2000 00:13:28.669 [ 00:13:28.669 { 00:13:28.669 "name": 
"7df8a137-efa0-4856-a0a3-a4838c673d0d", 00:13:28.669 "aliases": [ 00:13:28.669 "lvs/lvol" 00:13:28.669 ], 00:13:28.669 "product_name": "Logical Volume", 00:13:28.669 "block_size": 4096, 00:13:28.669 "num_blocks": 38912, 00:13:28.669 "uuid": "7df8a137-efa0-4856-a0a3-a4838c673d0d", 00:13:28.669 "assigned_rate_limits": { 00:13:28.669 "rw_ios_per_sec": 0, 00:13:28.669 "rw_mbytes_per_sec": 0, 00:13:28.669 "r_mbytes_per_sec": 0, 00:13:28.669 "w_mbytes_per_sec": 0 00:13:28.669 }, 00:13:28.669 "claimed": false, 00:13:28.669 "zoned": false, 00:13:28.669 "supported_io_types": { 00:13:28.669 "read": true, 00:13:28.669 "write": true, 00:13:28.669 "unmap": true, 00:13:28.669 "write_zeroes": true, 00:13:28.669 "flush": false, 00:13:28.669 "reset": true, 00:13:28.669 "compare": false, 00:13:28.669 "compare_and_write": false, 00:13:28.669 "abort": false, 00:13:28.669 "nvme_admin": false, 00:13:28.669 "nvme_io": false 00:13:28.669 }, 00:13:28.669 "driver_specific": { 00:13:28.669 "lvol": { 00:13:28.669 "lvol_store_uuid": "4f0b9408-1b85-4704-a330-b67c3d0bf49c", 00:13:28.669 "base_bdev": "aio_bdev", 00:13:28.669 "thin_provision": false, 00:13:28.669 "snapshot": false, 00:13:28.669 "clone": false, 00:13:28.669 "esnap_clone": false 00:13:28.669 } 00:13:28.669 } 00:13:28.669 } 00:13:28.669 ] 00:13:28.669 16:24:37 -- common/autotest_common.sh@893 -- # return 0 00:13:28.669 16:24:37 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4f0b9408-1b85-4704-a330-b67c3d0bf49c 00:13:28.669 16:24:37 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:13:28.669 16:24:37 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:13:28.669 16:24:37 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4f0b9408-1b85-4704-a330-b67c3d0bf49c 00:13:28.669 16:24:37 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:13:28.929 16:24:37 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:13:28.930 16:24:37 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7df8a137-efa0-4856-a0a3-a4838c673d0d 00:13:29.190 16:24:37 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4f0b9408-1b85-4704-a330-b67c3d0bf49c 00:13:29.190 16:24:38 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:29.448 16:24:38 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:29.448 00:13:29.448 real 0m15.750s 00:13:29.448 user 0m15.613s 00:13:29.448 sys 0m1.268s 00:13:29.448 16:24:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:29.448 16:24:38 -- common/autotest_common.sh@10 -- # set +x 00:13:29.448 ************************************ 00:13:29.448 END TEST lvs_grow_clean 00:13:29.448 ************************************ 00:13:29.448 16:24:38 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:13:29.448 16:24:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:29.448 16:24:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:29.448 16:24:38 -- common/autotest_common.sh@10 -- # set +x 00:13:29.707 ************************************ 00:13:29.707 START TEST lvs_grow_dirty 00:13:29.707 ************************************ 00:13:29.707 16:24:38 -- 
common/autotest_common.sh@1111 -- # lvs_grow dirty 00:13:29.707 16:24:38 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:29.707 16:24:38 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:29.707 16:24:38 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:29.707 16:24:38 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:29.707 16:24:38 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:29.707 16:24:38 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:29.707 16:24:38 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:29.707 16:24:38 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:29.707 16:24:38 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:29.966 16:24:38 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:29.966 16:24:38 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:29.966 16:24:38 -- target/nvmf_lvs_grow.sh@28 -- # lvs=c7de911d-b296-44f3-a916-2b4b9ab1beec 00:13:29.966 16:24:38 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7de911d-b296-44f3-a916-2b4b9ab1beec 00:13:29.966 16:24:38 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:30.225 16:24:39 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:30.225 16:24:39 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:30.225 16:24:39 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c7de911d-b296-44f3-a916-2b4b9ab1beec lvol 150 00:13:30.484 16:24:39 -- target/nvmf_lvs_grow.sh@33 -- # lvol=6d131bb9-869e-4a78-a90f-2560e6a8a658 00:13:30.484 16:24:39 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:30.484 16:24:39 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:30.484 [2024-04-26 16:24:39.462768] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:30.484 [2024-04-26 16:24:39.462822] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:30.484 true 00:13:30.484 16:24:39 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7de911d-b296-44f3-a916-2b4b9ab1beec 00:13:30.484 16:24:39 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:30.743 16:24:39 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:30.743 16:24:39 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:31.003 16:24:39 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
6d131bb9-869e-4a78-a90f-2560e6a8a658 00:13:31.003 16:24:39 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:13:31.262 16:24:40 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:13:31.521 16:24:40 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=439049 00:13:31.521 16:24:40 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:31.521 16:24:40 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:31.521 16:24:40 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 439049 /var/tmp/bdevperf.sock 00:13:31.521 16:24:40 -- common/autotest_common.sh@817 -- # '[' -z 439049 ']' 00:13:31.521 16:24:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:31.521 16:24:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:31.521 16:24:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:31.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:31.521 16:24:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:31.521 16:24:40 -- common/autotest_common.sh@10 -- # set +x 00:13:31.521 [2024-04-26 16:24:40.378260] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:13:31.521 [2024-04-26 16:24:40.378327] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid439049 ] 00:13:31.521 EAL: No free 2048 kB hugepages reported on node 1 00:13:31.521 [2024-04-26 16:24:40.451446] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.521 [2024-04-26 16:24:40.529322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:32.459 16:24:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:32.459 16:24:41 -- common/autotest_common.sh@850 -- # return 0 00:13:32.459 16:24:41 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:32.459 Nvme0n1 00:13:32.459 16:24:41 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:32.719 [ 00:13:32.719 { 00:13:32.719 "name": "Nvme0n1", 00:13:32.719 "aliases": [ 00:13:32.719 "6d131bb9-869e-4a78-a90f-2560e6a8a658" 00:13:32.719 ], 00:13:32.719 "product_name": "NVMe disk", 00:13:32.719 "block_size": 4096, 00:13:32.719 "num_blocks": 38912, 00:13:32.719 "uuid": "6d131bb9-869e-4a78-a90f-2560e6a8a658", 00:13:32.719 "assigned_rate_limits": { 00:13:32.719 "rw_ios_per_sec": 0, 00:13:32.719 "rw_mbytes_per_sec": 0, 00:13:32.719 "r_mbytes_per_sec": 0, 00:13:32.719 "w_mbytes_per_sec": 0 00:13:32.719 }, 00:13:32.719 "claimed": false, 00:13:32.719 "zoned": false, 00:13:32.719 "supported_io_types": { 00:13:32.719 "read": true, 00:13:32.719 "write": true, 00:13:32.719 
"unmap": true, 00:13:32.719 "write_zeroes": true, 00:13:32.719 "flush": true, 00:13:32.719 "reset": true, 00:13:32.719 "compare": true, 00:13:32.719 "compare_and_write": true, 00:13:32.719 "abort": true, 00:13:32.719 "nvme_admin": true, 00:13:32.719 "nvme_io": true 00:13:32.719 }, 00:13:32.719 "memory_domains": [ 00:13:32.719 { 00:13:32.719 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:13:32.719 "dma_device_type": 0 00:13:32.719 } 00:13:32.719 ], 00:13:32.719 "driver_specific": { 00:13:32.719 "nvme": [ 00:13:32.719 { 00:13:32.719 "trid": { 00:13:32.719 "trtype": "RDMA", 00:13:32.719 "adrfam": "IPv4", 00:13:32.719 "traddr": "192.168.100.8", 00:13:32.719 "trsvcid": "4420", 00:13:32.719 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:32.719 }, 00:13:32.719 "ctrlr_data": { 00:13:32.719 "cntlid": 1, 00:13:32.719 "vendor_id": "0x8086", 00:13:32.719 "model_number": "SPDK bdev Controller", 00:13:32.719 "serial_number": "SPDK0", 00:13:32.719 "firmware_revision": "24.05", 00:13:32.719 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:32.719 "oacs": { 00:13:32.719 "security": 0, 00:13:32.719 "format": 0, 00:13:32.719 "firmware": 0, 00:13:32.719 "ns_manage": 0 00:13:32.719 }, 00:13:32.719 "multi_ctrlr": true, 00:13:32.719 "ana_reporting": false 00:13:32.719 }, 00:13:32.719 "vs": { 00:13:32.719 "nvme_version": "1.3" 00:13:32.719 }, 00:13:32.719 "ns_data": { 00:13:32.719 "id": 1, 00:13:32.719 "can_share": true 00:13:32.719 } 00:13:32.719 } 00:13:32.719 ], 00:13:32.719 "mp_policy": "active_passive" 00:13:32.719 } 00:13:32.719 } 00:13:32.719 ] 00:13:32.719 16:24:41 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=439235 00:13:32.719 16:24:41 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:32.719 16:24:41 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:32.719 Running I/O for 10 seconds... 
00:13:34.097 Latency(us) 00:13:34.097 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:34.097 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:34.097 Nvme0n1 : 1.00 35328.00 138.00 0.00 0.00 0.00 0.00 0.00 00:13:34.097 =================================================================================================================== 00:13:34.097 Total : 35328.00 138.00 0.00 0.00 0.00 0.00 0.00 00:13:34.097 00:13:34.664 16:24:43 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c7de911d-b296-44f3-a916-2b4b9ab1beec 00:13:34.923 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:34.923 Nvme0n1 : 2.00 35618.50 139.13 0.00 0.00 0.00 0.00 0.00 00:13:34.923 =================================================================================================================== 00:13:34.923 Total : 35618.50 139.13 0.00 0.00 0.00 0.00 0.00 00:13:34.923 00:13:34.923 true 00:13:34.923 16:24:43 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:34.923 16:24:43 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7de911d-b296-44f3-a916-2b4b9ab1beec 00:13:35.181 16:24:43 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:35.181 16:24:43 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:35.181 16:24:43 -- target/nvmf_lvs_grow.sh@65 -- # wait 439235 00:13:35.749 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:35.749 Nvme0n1 : 3.00 35669.33 139.33 0.00 0.00 0.00 0.00 0.00 00:13:35.749 =================================================================================================================== 00:13:35.749 Total : 35669.33 139.33 0.00 0.00 0.00 0.00 0.00 00:13:35.749 00:13:36.686 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:36.686 Nvme0n1 : 4.00 35810.75 139.89 0.00 0.00 0.00 0.00 0.00 00:13:36.686 =================================================================================================================== 00:13:36.686 Total : 35810.75 139.89 0.00 0.00 0.00 0.00 0.00 00:13:36.686 00:13:38.066 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:38.066 Nvme0n1 : 5.00 35893.00 140.21 0.00 0.00 0.00 0.00 0.00 00:13:38.066 =================================================================================================================== 00:13:38.066 Total : 35893.00 140.21 0.00 0.00 0.00 0.00 0.00 00:13:38.066 00:13:39.003 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:39.003 Nvme0n1 : 6.00 35961.83 140.48 0.00 0.00 0.00 0.00 0.00 00:13:39.003 =================================================================================================================== 00:13:39.003 Total : 35961.83 140.48 0.00 0.00 0.00 0.00 0.00 00:13:39.003 00:13:39.940 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:39.940 Nvme0n1 : 7.00 36017.29 140.69 0.00 0.00 0.00 0.00 0.00 00:13:39.940 =================================================================================================================== 00:13:39.940 Total : 36017.29 140.69 0.00 0.00 0.00 0.00 0.00 00:13:39.940 00:13:40.877 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:40.877 Nvme0n1 : 8.00 36055.25 140.84 0.00 0.00 0.00 0.00 0.00 00:13:40.878 
=================================================================================================================== 00:13:40.878 Total : 36055.25 140.84 0.00 0.00 0.00 0.00 0.00 00:13:40.878 00:13:41.815 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:41.815 Nvme0n1 : 9.00 35988.67 140.58 0.00 0.00 0.00 0.00 0.00 00:13:41.815 =================================================================================================================== 00:13:41.815 Total : 35988.67 140.58 0.00 0.00 0.00 0.00 0.00 00:13:41.815 00:13:42.753 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:42.753 Nvme0n1 : 10.00 35997.60 140.62 0.00 0.00 0.00 0.00 0.00 00:13:42.753 =================================================================================================================== 00:13:42.753 Total : 35997.60 140.62 0.00 0.00 0.00 0.00 0.00 00:13:42.753 00:13:42.753 00:13:42.753 Latency(us) 00:13:42.753 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:42.753 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:42.753 Nvme0n1 : 10.00 35997.40 140.61 0.00 0.00 3553.02 2236.77 8890.10 00:13:42.753 =================================================================================================================== 00:13:42.753 Total : 35997.40 140.61 0.00 0.00 3553.02 2236.77 8890.10 00:13:42.753 0 00:13:42.753 16:24:51 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 439049 00:13:42.753 16:24:51 -- common/autotest_common.sh@936 -- # '[' -z 439049 ']' 00:13:42.753 16:24:51 -- common/autotest_common.sh@940 -- # kill -0 439049 00:13:42.753 16:24:51 -- common/autotest_common.sh@941 -- # uname 00:13:42.753 16:24:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:42.753 16:24:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 439049 00:13:42.753 16:24:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:42.753 16:24:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:43.013 16:24:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 439049' 00:13:43.013 killing process with pid 439049 00:13:43.013 16:24:51 -- common/autotest_common.sh@955 -- # kill 439049 00:13:43.013 Received shutdown signal, test time was about 10.000000 seconds 00:13:43.013 00:13:43.013 Latency(us) 00:13:43.013 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:43.013 =================================================================================================================== 00:13:43.013 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:43.013 16:24:51 -- common/autotest_common.sh@960 -- # wait 439049 00:13:43.013 16:24:51 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:43.272 16:24:52 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7de911d-b296-44f3-a916-2b4b9ab1beec 00:13:43.272 16:24:52 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:13:43.532 16:24:52 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:13:43.532 16:24:52 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:13:43.532 16:24:52 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 436477 00:13:43.532 16:24:52 -- target/nvmf_lvs_grow.sh@74 -- # wait 436477 00:13:43.532 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 436477 Killed "${NVMF_APP[@]}" "$@" 00:13:43.532 16:24:52 -- target/nvmf_lvs_grow.sh@74 -- # true 00:13:43.532 16:24:52 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:13:43.532 16:24:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:43.532 16:24:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:43.532 16:24:52 -- common/autotest_common.sh@10 -- # set +x 00:13:43.532 16:24:52 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:43.532 16:24:52 -- nvmf/common.sh@470 -- # nvmfpid=440694 00:13:43.532 16:24:52 -- nvmf/common.sh@471 -- # waitforlisten 440694 00:13:43.532 16:24:52 -- common/autotest_common.sh@817 -- # '[' -z 440694 ']' 00:13:43.532 16:24:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.532 16:24:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:43.532 16:24:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.532 16:24:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:43.532 16:24:52 -- common/autotest_common.sh@10 -- # set +x 00:13:43.532 [2024-04-26 16:24:52.438965] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:13:43.532 [2024-04-26 16:24:52.439030] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:43.532 EAL: No free 2048 kB hugepages reported on node 1 00:13:43.532 [2024-04-26 16:24:52.514295] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.791 [2024-04-26 16:24:52.598674] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:43.791 [2024-04-26 16:24:52.598715] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:43.791 [2024-04-26 16:24:52.598723] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:43.791 [2024-04-26 16:24:52.598747] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:43.791 [2024-04-26 16:24:52.598754] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
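This is the point of the dirty scenario: the original target (pid 436477) is killed with SIGKILL so the lvstore is never cleanly unloaded, then a fresh nvmf_tgt (pid 440694 below) is started and the same AIO file is re-registered, which forces the blobstore to recover the dirty metadata. In outline (commands as they appear in the trace; $SPDK/$AIO are shorthand for the workspace paths and this is a sketch of the intent, not the literal harness code):

SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
AIO=$SPDK/test/nvmf/target/aio_bdev
kill -9 436477            # old nvmf_tgt dies with the lvstore still marked dirty
$SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
$SPDK/scripts/rpc.py bdev_aio_create $AIO aio_bdev 4096
# -> blobstore.c: "Performing recovery on blobstore", blobs 0x0/0x1 recovered
$SPDK/scripts/rpc.py bdev_lvol_get_lvstores -u c7de911d-b296-44f3-a916-2b4b9ab1beec
# free_clusters / total_data_clusters must still read 61 / 99 after the recovery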
00:13:43.791 [2024-04-26 16:24:52.598781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.367 16:24:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:44.367 16:24:53 -- common/autotest_common.sh@850 -- # return 0 00:13:44.367 16:24:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:44.367 16:24:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:44.367 16:24:53 -- common/autotest_common.sh@10 -- # set +x 00:13:44.367 16:24:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:44.367 16:24:53 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:44.627 [2024-04-26 16:24:53.462409] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:13:44.627 [2024-04-26 16:24:53.462495] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:13:44.627 [2024-04-26 16:24:53.462523] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:13:44.627 16:24:53 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:13:44.627 16:24:53 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 6d131bb9-869e-4a78-a90f-2560e6a8a658 00:13:44.627 16:24:53 -- common/autotest_common.sh@885 -- # local bdev_name=6d131bb9-869e-4a78-a90f-2560e6a8a658 00:13:44.627 16:24:53 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:13:44.627 16:24:53 -- common/autotest_common.sh@887 -- # local i 00:13:44.627 16:24:53 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:13:44.627 16:24:53 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:13:44.627 16:24:53 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:44.886 16:24:53 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6d131bb9-869e-4a78-a90f-2560e6a8a658 -t 2000 00:13:44.886 [ 00:13:44.886 { 00:13:44.886 "name": "6d131bb9-869e-4a78-a90f-2560e6a8a658", 00:13:44.886 "aliases": [ 00:13:44.886 "lvs/lvol" 00:13:44.886 ], 00:13:44.886 "product_name": "Logical Volume", 00:13:44.886 "block_size": 4096, 00:13:44.886 "num_blocks": 38912, 00:13:44.886 "uuid": "6d131bb9-869e-4a78-a90f-2560e6a8a658", 00:13:44.886 "assigned_rate_limits": { 00:13:44.886 "rw_ios_per_sec": 0, 00:13:44.886 "rw_mbytes_per_sec": 0, 00:13:44.886 "r_mbytes_per_sec": 0, 00:13:44.886 "w_mbytes_per_sec": 0 00:13:44.886 }, 00:13:44.886 "claimed": false, 00:13:44.886 "zoned": false, 00:13:44.886 "supported_io_types": { 00:13:44.886 "read": true, 00:13:44.886 "write": true, 00:13:44.886 "unmap": true, 00:13:44.886 "write_zeroes": true, 00:13:44.886 "flush": false, 00:13:44.886 "reset": true, 00:13:44.886 "compare": false, 00:13:44.886 "compare_and_write": false, 00:13:44.886 "abort": false, 00:13:44.886 "nvme_admin": false, 00:13:44.886 "nvme_io": false 00:13:44.886 }, 00:13:44.886 "driver_specific": { 00:13:44.886 "lvol": { 00:13:44.886 "lvol_store_uuid": "c7de911d-b296-44f3-a916-2b4b9ab1beec", 00:13:44.886 "base_bdev": "aio_bdev", 00:13:44.886 "thin_provision": false, 00:13:44.886 "snapshot": false, 00:13:44.886 "clone": false, 00:13:44.886 "esnap_clone": false 00:13:44.886 } 00:13:44.886 } 00:13:44.886 } 00:13:44.886 ] 00:13:44.886 16:24:53 -- common/autotest_common.sh@893 -- # return 0 00:13:44.886 16:24:53 -- target/nvmf_lvs_grow.sh@78 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7de911d-b296-44f3-a916-2b4b9ab1beec 00:13:44.886 16:24:53 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:13:45.145 16:24:54 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:13:45.145 16:24:54 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7de911d-b296-44f3-a916-2b4b9ab1beec 00:13:45.145 16:24:54 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:13:45.403 16:24:54 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:13:45.403 16:24:54 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:45.403 [2024-04-26 16:24:54.342655] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:45.403 16:24:54 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7de911d-b296-44f3-a916-2b4b9ab1beec 00:13:45.403 16:24:54 -- common/autotest_common.sh@638 -- # local es=0 00:13:45.403 16:24:54 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7de911d-b296-44f3-a916-2b4b9ab1beec 00:13:45.403 16:24:54 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:45.403 16:24:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:45.403 16:24:54 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:45.403 16:24:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:45.403 16:24:54 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:45.403 16:24:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:45.403 16:24:54 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:45.403 16:24:54 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:13:45.403 16:24:54 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7de911d-b296-44f3-a916-2b4b9ab1beec 00:13:45.661 request: 00:13:45.661 { 00:13:45.661 "uuid": "c7de911d-b296-44f3-a916-2b4b9ab1beec", 00:13:45.661 "method": "bdev_lvol_get_lvstores", 00:13:45.661 "req_id": 1 00:13:45.661 } 00:13:45.661 Got JSON-RPC error response 00:13:45.661 response: 00:13:45.661 { 00:13:45.661 "code": -19, 00:13:45.661 "message": "No such device" 00:13:45.661 } 00:13:45.661 16:24:54 -- common/autotest_common.sh@641 -- # es=1 00:13:45.661 16:24:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:45.661 16:24:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:45.661 16:24:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:45.661 16:24:54 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:45.920 aio_bdev 00:13:45.920 16:24:54 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 6d131bb9-869e-4a78-a90f-2560e6a8a658 00:13:45.920 16:24:54 -- common/autotest_common.sh@885 -- # local 
bdev_name=6d131bb9-869e-4a78-a90f-2560e6a8a658 00:13:45.920 16:24:54 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:13:45.920 16:24:54 -- common/autotest_common.sh@887 -- # local i 00:13:45.920 16:24:54 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:13:45.920 16:24:54 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:13:45.920 16:24:54 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:45.920 16:24:54 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6d131bb9-869e-4a78-a90f-2560e6a8a658 -t 2000 00:13:46.179 [ 00:13:46.179 { 00:13:46.179 "name": "6d131bb9-869e-4a78-a90f-2560e6a8a658", 00:13:46.179 "aliases": [ 00:13:46.179 "lvs/lvol" 00:13:46.179 ], 00:13:46.179 "product_name": "Logical Volume", 00:13:46.179 "block_size": 4096, 00:13:46.179 "num_blocks": 38912, 00:13:46.179 "uuid": "6d131bb9-869e-4a78-a90f-2560e6a8a658", 00:13:46.179 "assigned_rate_limits": { 00:13:46.179 "rw_ios_per_sec": 0, 00:13:46.179 "rw_mbytes_per_sec": 0, 00:13:46.179 "r_mbytes_per_sec": 0, 00:13:46.179 "w_mbytes_per_sec": 0 00:13:46.179 }, 00:13:46.179 "claimed": false, 00:13:46.179 "zoned": false, 00:13:46.179 "supported_io_types": { 00:13:46.179 "read": true, 00:13:46.179 "write": true, 00:13:46.179 "unmap": true, 00:13:46.179 "write_zeroes": true, 00:13:46.179 "flush": false, 00:13:46.179 "reset": true, 00:13:46.179 "compare": false, 00:13:46.179 "compare_and_write": false, 00:13:46.179 "abort": false, 00:13:46.179 "nvme_admin": false, 00:13:46.179 "nvme_io": false 00:13:46.179 }, 00:13:46.179 "driver_specific": { 00:13:46.179 "lvol": { 00:13:46.179 "lvol_store_uuid": "c7de911d-b296-44f3-a916-2b4b9ab1beec", 00:13:46.179 "base_bdev": "aio_bdev", 00:13:46.179 "thin_provision": false, 00:13:46.179 "snapshot": false, 00:13:46.179 "clone": false, 00:13:46.179 "esnap_clone": false 00:13:46.179 } 00:13:46.179 } 00:13:46.179 } 00:13:46.179 ] 00:13:46.179 16:24:55 -- common/autotest_common.sh@893 -- # return 0 00:13:46.179 16:24:55 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7de911d-b296-44f3-a916-2b4b9ab1beec 00:13:46.179 16:24:55 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:13:46.438 16:24:55 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:13:46.438 16:24:55 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7de911d-b296-44f3-a916-2b4b9ab1beec 00:13:46.438 16:24:55 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:13:46.438 16:24:55 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:13:46.438 16:24:55 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6d131bb9-869e-4a78-a90f-2560e6a8a658 00:13:46.697 16:24:55 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c7de911d-b296-44f3-a916-2b4b9ab1beec 00:13:46.956 16:24:55 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:46.956 16:24:55 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:46.956 00:13:46.956 real 0m17.344s 00:13:46.956 user 0m45.261s 00:13:46.956 sys 0m3.479s 00:13:46.956 16:24:55 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:13:46.956 16:24:55 -- common/autotest_common.sh@10 -- # set +x 00:13:46.956 ************************************ 00:13:46.956 END TEST lvs_grow_dirty 00:13:46.956 ************************************ 00:13:46.956 16:24:55 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:13:47.215 16:24:55 -- common/autotest_common.sh@794 -- # type=--id 00:13:47.215 16:24:55 -- common/autotest_common.sh@795 -- # id=0 00:13:47.215 16:24:55 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:13:47.215 16:24:55 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:47.215 16:24:55 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:13:47.215 16:24:55 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:13:47.215 16:24:55 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:13:47.215 16:24:55 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:47.215 nvmf_trace.0 00:13:47.215 16:24:56 -- common/autotest_common.sh@809 -- # return 0 00:13:47.215 16:24:56 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:13:47.215 16:24:56 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:47.215 16:24:56 -- nvmf/common.sh@117 -- # sync 00:13:47.215 16:24:56 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:47.215 16:24:56 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:47.215 16:24:56 -- nvmf/common.sh@120 -- # set +e 00:13:47.215 16:24:56 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:47.215 16:24:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:47.215 rmmod nvme_rdma 00:13:47.216 rmmod nvme_fabrics 00:13:47.216 16:24:56 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:47.216 16:24:56 -- nvmf/common.sh@124 -- # set -e 00:13:47.216 16:24:56 -- nvmf/common.sh@125 -- # return 0 00:13:47.216 16:24:56 -- nvmf/common.sh@478 -- # '[' -n 440694 ']' 00:13:47.216 16:24:56 -- nvmf/common.sh@479 -- # killprocess 440694 00:13:47.216 16:24:56 -- common/autotest_common.sh@936 -- # '[' -z 440694 ']' 00:13:47.216 16:24:56 -- common/autotest_common.sh@940 -- # kill -0 440694 00:13:47.216 16:24:56 -- common/autotest_common.sh@941 -- # uname 00:13:47.216 16:24:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:47.216 16:24:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 440694 00:13:47.216 16:24:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:47.216 16:24:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:47.216 16:24:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 440694' 00:13:47.216 killing process with pid 440694 00:13:47.216 16:24:56 -- common/autotest_common.sh@955 -- # kill 440694 00:13:47.216 16:24:56 -- common/autotest_common.sh@960 -- # wait 440694 00:13:47.475 16:24:56 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:47.475 16:24:56 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:13:47.475 00:13:47.475 real 0m41.315s 00:13:47.475 user 1m6.958s 00:13:47.475 sys 0m10.148s 00:13:47.475 16:24:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:47.475 16:24:56 -- common/autotest_common.sh@10 -- # set +x 00:13:47.475 ************************************ 00:13:47.475 END TEST nvmf_lvs_grow 00:13:47.475 ************************************ 00:13:47.475 16:24:56 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:13:47.475 16:24:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:47.475 16:24:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:47.475 16:24:56 -- common/autotest_common.sh@10 -- # set +x 00:13:47.734 ************************************ 00:13:47.734 START TEST nvmf_bdev_io_wait 00:13:47.734 ************************************ 00:13:47.734 16:24:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:13:47.734 * Looking for test storage... 00:13:47.734 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:47.734 16:24:56 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:47.734 16:24:56 -- nvmf/common.sh@7 -- # uname -s 00:13:47.734 16:24:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:47.734 16:24:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:47.734 16:24:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:47.734 16:24:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:47.734 16:24:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:47.734 16:24:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:47.734 16:24:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:47.734 16:24:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:47.734 16:24:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:47.734 16:24:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:47.734 16:24:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:13:47.734 16:24:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:13:47.734 16:24:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:47.734 16:24:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:47.734 16:24:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:47.734 16:24:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:47.734 16:24:56 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:47.734 16:24:56 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:47.734 16:24:56 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:47.734 16:24:56 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:47.734 16:24:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.734 16:24:56 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.734 16:24:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.734 16:24:56 -- paths/export.sh@5 -- # export PATH 00:13:47.734 16:24:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:47.734 16:24:56 -- nvmf/common.sh@47 -- # : 0 00:13:47.734 16:24:56 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:47.734 16:24:56 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:47.734 16:24:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:47.734 16:24:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:47.734 16:24:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:47.734 16:24:56 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:47.734 16:24:56 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:47.734 16:24:56 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:47.734 16:24:56 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:47.734 16:24:56 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:47.734 16:24:56 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:13:47.734 16:24:56 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:13:47.734 16:24:56 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:47.734 16:24:56 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:47.734 16:24:56 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:47.734 16:24:56 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:47.734 16:24:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:47.734 16:24:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:47.734 16:24:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:47.734 16:24:56 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:47.734 16:24:56 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:47.734 16:24:56 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:47.734 16:24:56 -- common/autotest_common.sh@10 -- # set +x 00:13:54.305 16:25:02 -- nvmf/common.sh@289 -- # local intel=0x8086 
mellanox=0x15b3 pci 00:13:54.305 16:25:02 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:54.305 16:25:02 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:54.305 16:25:02 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:54.305 16:25:02 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:54.305 16:25:02 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:54.305 16:25:02 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:54.305 16:25:02 -- nvmf/common.sh@295 -- # net_devs=() 00:13:54.305 16:25:02 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:54.305 16:25:02 -- nvmf/common.sh@296 -- # e810=() 00:13:54.305 16:25:02 -- nvmf/common.sh@296 -- # local -ga e810 00:13:54.305 16:25:02 -- nvmf/common.sh@297 -- # x722=() 00:13:54.305 16:25:02 -- nvmf/common.sh@297 -- # local -ga x722 00:13:54.305 16:25:02 -- nvmf/common.sh@298 -- # mlx=() 00:13:54.305 16:25:02 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:54.305 16:25:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:54.305 16:25:02 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:54.305 16:25:02 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:54.305 16:25:02 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:54.305 16:25:02 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:54.305 16:25:02 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:54.305 16:25:02 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:54.305 16:25:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:54.305 16:25:02 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:54.305 16:25:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:54.305 16:25:02 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:54.305 16:25:02 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:54.305 16:25:02 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:54.305 16:25:02 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:54.305 16:25:02 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:54.305 16:25:02 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:54.305 16:25:02 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:54.305 16:25:02 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:54.305 16:25:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:54.305 16:25:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:13:54.305 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:13:54.305 16:25:02 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:54.305 16:25:02 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:54.305 16:25:02 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:13:54.305 16:25:02 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:13:54.305 16:25:02 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:54.305 16:25:02 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:54.305 16:25:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:54.305 16:25:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:13:54.305 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:13:54.305 16:25:02 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:54.305 16:25:02 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:54.305 16:25:02 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 
00:13:54.305 16:25:02 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:13:54.305 16:25:02 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:54.305 16:25:02 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:54.305 16:25:02 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:54.305 16:25:02 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:54.305 16:25:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:54.305 16:25:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:54.305 16:25:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:54.305 16:25:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:54.305 16:25:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:13:54.305 Found net devices under 0000:18:00.0: mlx_0_0 00:13:54.305 16:25:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:54.305 16:25:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:54.305 16:25:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:54.305 16:25:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:54.305 16:25:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:54.305 16:25:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:13:54.305 Found net devices under 0000:18:00.1: mlx_0_1 00:13:54.305 16:25:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:54.305 16:25:02 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:54.305 16:25:02 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:54.305 16:25:02 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:54.305 16:25:02 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:13:54.305 16:25:02 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:13:54.305 16:25:02 -- nvmf/common.sh@409 -- # rdma_device_init 00:13:54.305 16:25:02 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:13:54.305 16:25:02 -- nvmf/common.sh@58 -- # uname 00:13:54.305 16:25:02 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:54.305 16:25:02 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:54.305 16:25:02 -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:54.305 16:25:02 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:54.305 16:25:02 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:54.305 16:25:02 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:54.305 16:25:02 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:54.305 16:25:02 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:54.305 16:25:02 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:13:54.305 16:25:02 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:54.305 16:25:02 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:54.305 16:25:02 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:54.305 16:25:02 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:54.305 16:25:02 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:54.305 16:25:02 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:54.305 16:25:02 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:54.305 16:25:02 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:54.305 16:25:02 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:54.305 16:25:02 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:54.305 16:25:02 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:54.306 16:25:02 -- nvmf/common.sh@105 -- # continue 2 00:13:54.306 16:25:02 
-- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:54.306 16:25:02 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:54.306 16:25:02 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:54.306 16:25:02 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:54.306 16:25:02 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:54.306 16:25:02 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:54.306 16:25:02 -- nvmf/common.sh@105 -- # continue 2 00:13:54.306 16:25:02 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:54.306 16:25:02 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:54.306 16:25:02 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:54.306 16:25:02 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:54.306 16:25:02 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:54.306 16:25:02 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:54.306 16:25:02 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:54.306 16:25:02 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:54.306 16:25:02 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:54.306 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:54.306 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:13:54.306 altname enp24s0f0np0 00:13:54.306 altname ens785f0np0 00:13:54.306 inet 192.168.100.8/24 scope global mlx_0_0 00:13:54.306 valid_lft forever preferred_lft forever 00:13:54.306 16:25:02 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:54.306 16:25:02 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:54.306 16:25:02 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:54.306 16:25:02 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:54.306 16:25:02 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:54.306 16:25:02 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:54.306 16:25:02 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:54.306 16:25:02 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:54.306 16:25:02 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:54.306 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:54.306 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:13:54.306 altname enp24s0f1np1 00:13:54.306 altname ens785f1np1 00:13:54.306 inet 192.168.100.9/24 scope global mlx_0_1 00:13:54.306 valid_lft forever preferred_lft forever 00:13:54.306 16:25:02 -- nvmf/common.sh@411 -- # return 0 00:13:54.306 16:25:02 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:54.306 16:25:02 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:54.306 16:25:02 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:13:54.306 16:25:02 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:13:54.306 16:25:02 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:54.306 16:25:02 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:54.306 16:25:02 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:54.306 16:25:02 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:54.306 16:25:02 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:54.306 16:25:02 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:54.306 16:25:02 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:54.306 16:25:02 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:54.306 16:25:02 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:54.306 16:25:02 -- nvmf/common.sh@104 -- # echo mlx_0_0 
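For readers following the get_rdma_if_list/get_ip_address trace above, the address derivation reduces to the short shell sequence below. This is only a sketch of what the common.sh helpers appear to be doing for the mlx_0_0/mlx_0_1 ports seen in this run; it is not the helpers' actual code.

    # derive the IPv4 address assigned to each RDMA netdev, as the trace above does
    for ifc in mlx_0_0 mlx_0_1; do
        ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
    done
    # on this node that yields 192.168.100.8 and 192.168.100.9, matching RDMA_IP_LIST below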
00:13:54.306 16:25:02 -- nvmf/common.sh@105 -- # continue 2 00:13:54.306 16:25:02 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:54.306 16:25:02 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:54.306 16:25:02 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:54.306 16:25:02 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:54.306 16:25:02 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:54.306 16:25:02 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:54.306 16:25:02 -- nvmf/common.sh@105 -- # continue 2 00:13:54.306 16:25:02 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:54.306 16:25:02 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:54.306 16:25:02 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:54.306 16:25:02 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:54.306 16:25:02 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:54.306 16:25:02 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:54.306 16:25:02 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:54.306 16:25:02 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:54.306 16:25:02 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:54.306 16:25:02 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:54.306 16:25:02 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:54.306 16:25:02 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:54.306 16:25:02 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:13:54.306 192.168.100.9' 00:13:54.306 16:25:02 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:13:54.306 192.168.100.9' 00:13:54.306 16:25:02 -- nvmf/common.sh@446 -- # head -n 1 00:13:54.306 16:25:02 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:54.306 16:25:02 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:13:54.306 192.168.100.9' 00:13:54.306 16:25:02 -- nvmf/common.sh@447 -- # tail -n +2 00:13:54.306 16:25:02 -- nvmf/common.sh@447 -- # head -n 1 00:13:54.306 16:25:02 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:54.306 16:25:02 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:13:54.306 16:25:02 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:54.306 16:25:02 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:13:54.306 16:25:02 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:13:54.306 16:25:02 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:13:54.306 16:25:02 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:13:54.306 16:25:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:54.306 16:25:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:54.306 16:25:02 -- common/autotest_common.sh@10 -- # set +x 00:13:54.306 16:25:02 -- nvmf/common.sh@470 -- # nvmfpid=444060 00:13:54.306 16:25:02 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:13:54.306 16:25:02 -- nvmf/common.sh@471 -- # waitforlisten 444060 00:13:54.306 16:25:02 -- common/autotest_common.sh@817 -- # '[' -z 444060 ']' 00:13:54.306 16:25:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.306 16:25:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:54.306 16:25:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:54.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.306 16:25:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:54.306 16:25:02 -- common/autotest_common.sh@10 -- # set +x 00:13:54.306 [2024-04-26 16:25:02.378973] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:13:54.306 [2024-04-26 16:25:02.379027] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:54.306 EAL: No free 2048 kB hugepages reported on node 1 00:13:54.306 [2024-04-26 16:25:02.450598] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:54.306 [2024-04-26 16:25:02.536955] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:54.306 [2024-04-26 16:25:02.536994] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:54.306 [2024-04-26 16:25:02.537003] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:54.306 [2024-04-26 16:25:02.537028] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:54.306 [2024-04-26 16:25:02.537036] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:54.306 [2024-04-26 16:25:02.537092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:54.306 [2024-04-26 16:25:02.537175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:54.306 [2024-04-26 16:25:02.537251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:54.306 [2024-04-26 16:25:02.537253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.306 16:25:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:54.306 16:25:03 -- common/autotest_common.sh@850 -- # return 0 00:13:54.306 16:25:03 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:54.306 16:25:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:54.306 16:25:03 -- common/autotest_common.sh@10 -- # set +x 00:13:54.306 16:25:03 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:54.306 16:25:03 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:13:54.306 16:25:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:54.306 16:25:03 -- common/autotest_common.sh@10 -- # set +x 00:13:54.306 16:25:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:54.306 16:25:03 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:13:54.306 16:25:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:54.306 16:25:03 -- common/autotest_common.sh@10 -- # set +x 00:13:54.306 16:25:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:54.306 16:25:03 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:54.306 16:25:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:54.306 16:25:03 -- common/autotest_common.sh@10 -- # set +x 00:13:54.566 [2024-04-26 16:25:03.334875] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xdec390/0xdf0880) succeed. 00:13:54.566 [2024-04-26 16:25:03.344972] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xded9d0/0xe31f10) succeed. 
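Condensed for reference, the target-side setup this test drives through rpc_cmd (which wraps scripts/rpc.py against the target's RPC socket) is roughly the sequence below. Paths are shortened to rpc.py for readability, and the sketch only mirrors the calls traced above and below; it is not taken from bdev_io_wait.sh itself.

    # bdev options and framework init, as traced above
    rpc.py bdev_set_options -p 5 -c 1
    rpc.py framework_start_init
    # stand up the NVMe-oF RDMA target used by the bdevperf instances
    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420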
00:13:54.566 16:25:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:54.566 16:25:03 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:54.566 16:25:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:54.566 16:25:03 -- common/autotest_common.sh@10 -- # set +x 00:13:54.566 Malloc0 00:13:54.566 16:25:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:54.566 16:25:03 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:54.566 16:25:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:54.566 16:25:03 -- common/autotest_common.sh@10 -- # set +x 00:13:54.566 16:25:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:54.566 16:25:03 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:54.566 16:25:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:54.566 16:25:03 -- common/autotest_common.sh@10 -- # set +x 00:13:54.566 16:25:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:54.566 16:25:03 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:54.566 16:25:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:54.566 16:25:03 -- common/autotest_common.sh@10 -- # set +x 00:13:54.566 [2024-04-26 16:25:03.531914] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:54.566 16:25:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:54.566 16:25:03 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=444265 00:13:54.566 16:25:03 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:13:54.566 16:25:03 -- target/bdev_io_wait.sh@30 -- # READ_PID=444267 00:13:54.566 16:25:03 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:13:54.566 16:25:03 -- nvmf/common.sh@521 -- # config=() 00:13:54.566 16:25:03 -- nvmf/common.sh@521 -- # local subsystem config 00:13:54.566 16:25:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:54.566 16:25:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:54.566 { 00:13:54.566 "params": { 00:13:54.566 "name": "Nvme$subsystem", 00:13:54.566 "trtype": "$TEST_TRANSPORT", 00:13:54.566 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:54.566 "adrfam": "ipv4", 00:13:54.566 "trsvcid": "$NVMF_PORT", 00:13:54.566 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:54.566 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:54.566 "hdgst": ${hdgst:-false}, 00:13:54.566 "ddgst": ${ddgst:-false} 00:13:54.566 }, 00:13:54.566 "method": "bdev_nvme_attach_controller" 00:13:54.566 } 00:13:54.566 EOF 00:13:54.566 )") 00:13:54.566 16:25:03 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:13:54.566 16:25:03 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=444269 00:13:54.566 16:25:03 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:13:54.566 16:25:03 -- nvmf/common.sh@521 -- # config=() 00:13:54.566 16:25:03 -- nvmf/common.sh@521 -- # local subsystem config 00:13:54.566 16:25:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:54.566 16:25:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:54.566 { 00:13:54.566 "params": { 00:13:54.566 "name": 
"Nvme$subsystem", 00:13:54.566 "trtype": "$TEST_TRANSPORT", 00:13:54.566 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:54.566 "adrfam": "ipv4", 00:13:54.566 "trsvcid": "$NVMF_PORT", 00:13:54.566 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:54.566 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:54.566 "hdgst": ${hdgst:-false}, 00:13:54.566 "ddgst": ${ddgst:-false} 00:13:54.566 }, 00:13:54.566 "method": "bdev_nvme_attach_controller" 00:13:54.566 } 00:13:54.566 EOF 00:13:54.566 )") 00:13:54.566 16:25:03 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:13:54.566 16:25:03 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=444272 00:13:54.566 16:25:03 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:13:54.566 16:25:03 -- nvmf/common.sh@543 -- # cat 00:13:54.566 16:25:03 -- target/bdev_io_wait.sh@35 -- # sync 00:13:54.566 16:25:03 -- nvmf/common.sh@521 -- # config=() 00:13:54.566 16:25:03 -- nvmf/common.sh@521 -- # local subsystem config 00:13:54.566 16:25:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:54.566 16:25:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:54.566 { 00:13:54.566 "params": { 00:13:54.566 "name": "Nvme$subsystem", 00:13:54.566 "trtype": "$TEST_TRANSPORT", 00:13:54.566 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:54.566 "adrfam": "ipv4", 00:13:54.566 "trsvcid": "$NVMF_PORT", 00:13:54.566 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:54.566 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:54.566 "hdgst": ${hdgst:-false}, 00:13:54.566 "ddgst": ${ddgst:-false} 00:13:54.566 }, 00:13:54.566 "method": "bdev_nvme_attach_controller" 00:13:54.566 } 00:13:54.566 EOF 00:13:54.566 )") 00:13:54.566 16:25:03 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:13:54.566 16:25:03 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:13:54.566 16:25:03 -- nvmf/common.sh@521 -- # config=() 00:13:54.566 16:25:03 -- nvmf/common.sh@543 -- # cat 00:13:54.566 16:25:03 -- nvmf/common.sh@521 -- # local subsystem config 00:13:54.566 16:25:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:54.566 16:25:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:54.566 { 00:13:54.566 "params": { 00:13:54.566 "name": "Nvme$subsystem", 00:13:54.566 "trtype": "$TEST_TRANSPORT", 00:13:54.566 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:54.566 "adrfam": "ipv4", 00:13:54.566 "trsvcid": "$NVMF_PORT", 00:13:54.566 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:54.566 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:54.566 "hdgst": ${hdgst:-false}, 00:13:54.566 "ddgst": ${ddgst:-false} 00:13:54.566 }, 00:13:54.566 "method": "bdev_nvme_attach_controller" 00:13:54.566 } 00:13:54.566 EOF 00:13:54.566 )") 00:13:54.566 16:25:03 -- nvmf/common.sh@543 -- # cat 00:13:54.566 16:25:03 -- target/bdev_io_wait.sh@37 -- # wait 444265 00:13:54.566 16:25:03 -- nvmf/common.sh@543 -- # cat 00:13:54.566 16:25:03 -- nvmf/common.sh@545 -- # jq . 00:13:54.566 16:25:03 -- nvmf/common.sh@545 -- # jq . 00:13:54.566 16:25:03 -- nvmf/common.sh@546 -- # IFS=, 00:13:54.566 16:25:03 -- nvmf/common.sh@545 -- # jq . 
00:13:54.566 16:25:03 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:54.566 "params": { 00:13:54.566 "name": "Nvme1", 00:13:54.566 "trtype": "rdma", 00:13:54.566 "traddr": "192.168.100.8", 00:13:54.566 "adrfam": "ipv4", 00:13:54.566 "trsvcid": "4420", 00:13:54.566 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:54.566 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:54.566 "hdgst": false, 00:13:54.566 "ddgst": false 00:13:54.566 }, 00:13:54.566 "method": "bdev_nvme_attach_controller" 00:13:54.566 }' 00:13:54.566 16:25:03 -- nvmf/common.sh@545 -- # jq . 00:13:54.566 16:25:03 -- nvmf/common.sh@546 -- # IFS=, 00:13:54.566 16:25:03 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:54.566 "params": { 00:13:54.566 "name": "Nvme1", 00:13:54.566 "trtype": "rdma", 00:13:54.566 "traddr": "192.168.100.8", 00:13:54.566 "adrfam": "ipv4", 00:13:54.566 "trsvcid": "4420", 00:13:54.566 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:54.566 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:54.566 "hdgst": false, 00:13:54.566 "ddgst": false 00:13:54.566 }, 00:13:54.566 "method": "bdev_nvme_attach_controller" 00:13:54.566 }' 00:13:54.566 16:25:03 -- nvmf/common.sh@546 -- # IFS=, 00:13:54.566 16:25:03 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:54.566 "params": { 00:13:54.566 "name": "Nvme1", 00:13:54.566 "trtype": "rdma", 00:13:54.566 "traddr": "192.168.100.8", 00:13:54.566 "adrfam": "ipv4", 00:13:54.566 "trsvcid": "4420", 00:13:54.566 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:54.566 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:54.566 "hdgst": false, 00:13:54.566 "ddgst": false 00:13:54.566 }, 00:13:54.566 "method": "bdev_nvme_attach_controller" 00:13:54.566 }' 00:13:54.566 16:25:03 -- nvmf/common.sh@546 -- # IFS=, 00:13:54.566 16:25:03 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:54.566 "params": { 00:13:54.566 "name": "Nvme1", 00:13:54.566 "trtype": "rdma", 00:13:54.566 "traddr": "192.168.100.8", 00:13:54.566 "adrfam": "ipv4", 00:13:54.566 "trsvcid": "4420", 00:13:54.566 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:54.566 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:54.566 "hdgst": false, 00:13:54.566 "ddgst": false 00:13:54.566 }, 00:13:54.566 "method": "bdev_nvme_attach_controller" 00:13:54.566 }' 00:13:54.566 [2024-04-26 16:25:03.583076] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:13:54.566 [2024-04-26 16:25:03.583144] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:13:54.566 [2024-04-26 16:25:03.585527] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:13:54.566 [2024-04-26 16:25:03.585584] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:13:54.566 [2024-04-26 16:25:03.586313] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:13:54.566 [2024-04-26 16:25:03.586370] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:13:54.566 [2024-04-26 16:25:03.586927] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
00:13:54.566 [2024-04-26 16:25:03.586979] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:13:54.824 EAL: No free 2048 kB hugepages reported on node 1 00:13:54.824 EAL: No free 2048 kB hugepages reported on node 1 00:13:54.824 [2024-04-26 16:25:03.771711] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.824 EAL: No free 2048 kB hugepages reported on node 1 00:13:54.824 [2024-04-26 16:25:03.847220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:55.083 [2024-04-26 16:25:03.870606] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.083 EAL: No free 2048 kB hugepages reported on node 1 00:13:55.083 [2024-04-26 16:25:03.946675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:13:55.083 [2024-04-26 16:25:03.962285] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.083 [2024-04-26 16:25:04.037615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:13:55.083 [2024-04-26 16:25:04.063009] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.342 [2024-04-26 16:25:04.140328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:13:55.342 Running I/O for 1 seconds... 00:13:55.342 Running I/O for 1 seconds... 00:13:55.342 Running I/O for 1 seconds... 00:13:55.342 Running I/O for 1 seconds... 00:13:56.281 00:13:56.281 Latency(us) 00:13:56.281 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:56.281 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:13:56.281 Nvme1n1 : 1.01 16655.97 65.06 0.00 0.00 7661.35 4559.03 14816.83 00:13:56.281 =================================================================================================================== 00:13:56.281 Total : 16655.97 65.06 0.00 0.00 7661.35 4559.03 14816.83 00:13:56.281 00:13:56.281 Latency(us) 00:13:56.281 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:56.281 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:13:56.281 Nvme1n1 : 1.00 17524.91 68.46 0.00 0.00 7284.38 4587.52 18578.03 00:13:56.281 =================================================================================================================== 00:13:56.281 Total : 17524.91 68.46 0.00 0.00 7284.38 4587.52 18578.03 00:13:56.281 00:13:56.281 Latency(us) 00:13:56.281 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:56.281 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:13:56.281 Nvme1n1 : 1.00 15532.72 60.67 0.00 0.00 8215.77 4986.43 19717.79 00:13:56.281 =================================================================================================================== 00:13:56.281 Total : 15532.72 60.67 0.00 0.00 8215.77 4986.43 19717.79 00:13:56.281 00:13:56.281 Latency(us) 00:13:56.281 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:56.281 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:13:56.281 Nvme1n1 : 1.00 262159.25 1024.06 0.00 0.00 486.52 194.11 1681.14 00:13:56.281 =================================================================================================================== 00:13:56.281 Total : 262159.25 1024.06 0.00 0.00 486.52 194.11 1681.14 00:13:56.851 16:25:05 -- target/bdev_io_wait.sh@38 -- # wait 444267 00:13:56.851 
16:25:05 -- target/bdev_io_wait.sh@39 -- # wait 444269 00:13:56.851 16:25:05 -- target/bdev_io_wait.sh@40 -- # wait 444272 00:13:56.851 16:25:05 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:56.851 16:25:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:56.851 16:25:05 -- common/autotest_common.sh@10 -- # set +x 00:13:56.851 16:25:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:56.851 16:25:05 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:13:56.851 16:25:05 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:13:56.851 16:25:05 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:56.851 16:25:05 -- nvmf/common.sh@117 -- # sync 00:13:56.851 16:25:05 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:56.851 16:25:05 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:56.851 16:25:05 -- nvmf/common.sh@120 -- # set +e 00:13:56.851 16:25:05 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:56.851 16:25:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:56.851 rmmod nvme_rdma 00:13:56.851 rmmod nvme_fabrics 00:13:56.851 16:25:05 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:56.851 16:25:05 -- nvmf/common.sh@124 -- # set -e 00:13:56.851 16:25:05 -- nvmf/common.sh@125 -- # return 0 00:13:56.851 16:25:05 -- nvmf/common.sh@478 -- # '[' -n 444060 ']' 00:13:56.851 16:25:05 -- nvmf/common.sh@479 -- # killprocess 444060 00:13:56.851 16:25:05 -- common/autotest_common.sh@936 -- # '[' -z 444060 ']' 00:13:56.851 16:25:05 -- common/autotest_common.sh@940 -- # kill -0 444060 00:13:56.851 16:25:05 -- common/autotest_common.sh@941 -- # uname 00:13:56.851 16:25:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:56.851 16:25:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 444060 00:13:56.851 16:25:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:56.851 16:25:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:56.851 16:25:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 444060' 00:13:56.851 killing process with pid 444060 00:13:56.851 16:25:05 -- common/autotest_common.sh@955 -- # kill 444060 00:13:56.851 16:25:05 -- common/autotest_common.sh@960 -- # wait 444060 00:13:56.851 [2024-04-26 16:25:05.808112] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:13:57.111 16:25:06 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:57.111 16:25:06 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:13:57.111 00:13:57.111 real 0m9.508s 00:13:57.111 user 0m21.082s 00:13:57.111 sys 0m5.848s 00:13:57.111 16:25:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:57.111 16:25:06 -- common/autotest_common.sh@10 -- # set +x 00:13:57.111 ************************************ 00:13:57.111 END TEST nvmf_bdev_io_wait 00:13:57.111 ************************************ 00:13:57.111 16:25:06 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:13:57.111 16:25:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:57.111 16:25:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:57.111 16:25:06 -- common/autotest_common.sh@10 -- # set +x 00:13:57.371 ************************************ 00:13:57.371 START TEST nvmf_queue_depth 00:13:57.371 ************************************ 00:13:57.371 16:25:06 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:13:57.371 * Looking for test storage... 00:13:57.371 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:57.371 16:25:06 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:57.371 16:25:06 -- nvmf/common.sh@7 -- # uname -s 00:13:57.371 16:25:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:57.371 16:25:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:57.371 16:25:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:57.371 16:25:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:57.371 16:25:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:57.371 16:25:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:57.371 16:25:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:57.371 16:25:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:57.371 16:25:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:57.371 16:25:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:57.371 16:25:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:13:57.371 16:25:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:13:57.371 16:25:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:57.371 16:25:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:57.371 16:25:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:57.371 16:25:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:57.371 16:25:06 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:57.371 16:25:06 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:57.371 16:25:06 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:57.371 16:25:06 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:57.371 16:25:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.371 16:25:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.371 16:25:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.371 16:25:06 -- paths/export.sh@5 -- # export PATH 00:13:57.371 16:25:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:57.371 16:25:06 -- nvmf/common.sh@47 -- # : 0 00:13:57.371 16:25:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:57.371 16:25:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:57.372 16:25:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:57.372 16:25:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:57.372 16:25:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:57.372 16:25:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:57.372 16:25:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:57.372 16:25:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:57.372 16:25:06 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:13:57.372 16:25:06 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:13:57.372 16:25:06 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:57.372 16:25:06 -- target/queue_depth.sh@19 -- # nvmftestinit 00:13:57.372 16:25:06 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:13:57.372 16:25:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:57.372 16:25:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:57.372 16:25:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:57.372 16:25:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:57.372 16:25:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:57.372 16:25:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:57.372 16:25:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.372 16:25:06 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:57.372 16:25:06 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:57.372 16:25:06 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:57.372 16:25:06 -- common/autotest_common.sh@10 -- # set +x 00:14:03.941 16:25:12 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:03.941 16:25:12 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:03.941 16:25:12 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:03.941 16:25:12 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:03.941 16:25:12 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:03.941 16:25:12 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:03.941 16:25:12 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:03.941 16:25:12 -- nvmf/common.sh@295 -- # net_devs=() 
00:14:03.941 16:25:12 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:03.941 16:25:12 -- nvmf/common.sh@296 -- # e810=() 00:14:03.941 16:25:12 -- nvmf/common.sh@296 -- # local -ga e810 00:14:03.941 16:25:12 -- nvmf/common.sh@297 -- # x722=() 00:14:03.941 16:25:12 -- nvmf/common.sh@297 -- # local -ga x722 00:14:03.941 16:25:12 -- nvmf/common.sh@298 -- # mlx=() 00:14:03.941 16:25:12 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:03.941 16:25:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:03.941 16:25:12 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:03.941 16:25:12 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:03.941 16:25:12 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:03.941 16:25:12 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:03.941 16:25:12 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:03.941 16:25:12 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:03.941 16:25:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:03.941 16:25:12 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:03.941 16:25:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:03.941 16:25:12 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:03.941 16:25:12 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:03.941 16:25:12 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:03.941 16:25:12 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:03.941 16:25:12 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:03.941 16:25:12 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:03.941 16:25:12 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:03.941 16:25:12 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:03.941 16:25:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:03.941 16:25:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:14:03.941 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:14:03.941 16:25:12 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:03.941 16:25:12 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:03.941 16:25:12 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:14:03.941 16:25:12 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:14:03.941 16:25:12 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:03.941 16:25:12 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:03.941 16:25:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:03.941 16:25:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:14:03.941 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:14:03.941 16:25:12 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:03.941 16:25:12 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:03.941 16:25:12 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:14:03.941 16:25:12 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:14:03.941 16:25:12 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:03.941 16:25:12 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:03.941 16:25:12 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:03.941 16:25:12 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:03.941 16:25:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:03.941 16:25:12 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:03.941 16:25:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:03.941 16:25:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:03.941 16:25:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:14:03.941 Found net devices under 0000:18:00.0: mlx_0_0 00:14:03.941 16:25:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:03.941 16:25:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:03.941 16:25:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:03.941 16:25:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:03.941 16:25:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:03.941 16:25:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:14:03.941 Found net devices under 0000:18:00.1: mlx_0_1 00:14:03.941 16:25:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:03.941 16:25:12 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:03.941 16:25:12 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:03.941 16:25:12 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:03.941 16:25:12 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:14:03.941 16:25:12 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:14:03.941 16:25:12 -- nvmf/common.sh@409 -- # rdma_device_init 00:14:03.941 16:25:12 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:14:03.941 16:25:12 -- nvmf/common.sh@58 -- # uname 00:14:03.941 16:25:12 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:03.941 16:25:12 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:03.941 16:25:12 -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:03.941 16:25:12 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:03.941 16:25:12 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:03.941 16:25:12 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:03.941 16:25:12 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:03.941 16:25:12 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:03.941 16:25:12 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:14:03.941 16:25:12 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:03.941 16:25:12 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:03.941 16:25:12 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:03.941 16:25:12 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:03.941 16:25:12 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:03.941 16:25:12 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:03.941 16:25:12 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:03.941 16:25:12 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:03.941 16:25:12 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:03.941 16:25:12 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:03.941 16:25:12 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:03.941 16:25:12 -- nvmf/common.sh@105 -- # continue 2 00:14:03.942 16:25:12 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:03.942 16:25:12 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:03.942 16:25:12 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:03.942 16:25:12 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:03.942 16:25:12 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:03.942 16:25:12 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:03.942 16:25:12 -- 
nvmf/common.sh@105 -- # continue 2 00:14:03.942 16:25:12 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:03.942 16:25:12 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:03.942 16:25:12 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:03.942 16:25:12 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:03.942 16:25:12 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:03.942 16:25:12 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:03.942 16:25:12 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:03.942 16:25:12 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:03.942 16:25:12 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:03.942 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:03.942 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:14:03.942 altname enp24s0f0np0 00:14:03.942 altname ens785f0np0 00:14:03.942 inet 192.168.100.8/24 scope global mlx_0_0 00:14:03.942 valid_lft forever preferred_lft forever 00:14:03.942 16:25:12 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:03.942 16:25:12 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:03.942 16:25:12 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:03.942 16:25:12 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:03.942 16:25:12 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:03.942 16:25:12 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:03.942 16:25:12 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:03.942 16:25:12 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:03.942 16:25:12 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:03.942 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:03.942 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:14:03.942 altname enp24s0f1np1 00:14:03.942 altname ens785f1np1 00:14:03.942 inet 192.168.100.9/24 scope global mlx_0_1 00:14:03.942 valid_lft forever preferred_lft forever 00:14:03.942 16:25:12 -- nvmf/common.sh@411 -- # return 0 00:14:03.942 16:25:12 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:03.942 16:25:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:03.942 16:25:12 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:14:03.942 16:25:12 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:14:03.942 16:25:12 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:03.942 16:25:12 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:03.942 16:25:12 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:03.942 16:25:12 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:03.942 16:25:12 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:03.942 16:25:12 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:03.942 16:25:12 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:03.942 16:25:12 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:03.942 16:25:12 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:03.942 16:25:12 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:03.942 16:25:12 -- nvmf/common.sh@105 -- # continue 2 00:14:03.942 16:25:12 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:03.942 16:25:12 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:03.942 16:25:12 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:03.942 16:25:12 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:03.942 16:25:12 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
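Stripped of the xtrace noise, the RDMA bring-up and per-port address readout above reduce to a short loop; a condensed sketch with the module list, interface names, and addresses exactly as they appear in the trace:

  # rdma_device_init + allocate_nic_ips, condensed from the trace above.
  for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$m"                                     # load the kernel IB/RDMA stack
  done
  for nic in mlx_0_0 mlx_0_1; do
    ip -o -4 addr show "$nic" | awk '{print $4}' | cut -d/ -f1
  done
  # prints 192.168.100.8 and 192.168.100.9, which later feed RDMA_IP_LIST and
  # NVMF_FIRST_TARGET_IP / NVMF_SECOND_TARGET_IP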
00:14:03.942 16:25:12 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:03.942 16:25:12 -- nvmf/common.sh@105 -- # continue 2 00:14:03.942 16:25:12 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:03.942 16:25:12 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:03.942 16:25:12 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:03.942 16:25:12 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:03.942 16:25:12 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:03.942 16:25:12 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:03.942 16:25:12 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:03.942 16:25:12 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:03.942 16:25:12 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:03.942 16:25:12 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:03.942 16:25:12 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:03.942 16:25:12 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:03.942 16:25:12 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:14:03.942 192.168.100.9' 00:14:03.942 16:25:12 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:14:03.942 192.168.100.9' 00:14:03.942 16:25:12 -- nvmf/common.sh@446 -- # head -n 1 00:14:03.942 16:25:12 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:03.942 16:25:12 -- nvmf/common.sh@447 -- # head -n 1 00:14:03.942 16:25:12 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:14:03.942 192.168.100.9' 00:14:03.942 16:25:12 -- nvmf/common.sh@447 -- # tail -n +2 00:14:03.942 16:25:12 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:03.942 16:25:12 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:14:03.942 16:25:12 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:03.942 16:25:12 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:14:03.942 16:25:12 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:14:03.942 16:25:12 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:14:03.942 16:25:12 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:14:03.942 16:25:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:03.942 16:25:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:03.942 16:25:12 -- common/autotest_common.sh@10 -- # set +x 00:14:03.942 16:25:12 -- nvmf/common.sh@470 -- # nvmfpid=447582 00:14:03.942 16:25:12 -- nvmf/common.sh@471 -- # waitforlisten 447582 00:14:03.942 16:25:12 -- common/autotest_common.sh@817 -- # '[' -z 447582 ']' 00:14:03.942 16:25:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.942 16:25:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:03.942 16:25:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:03.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:03.942 16:25:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:03.942 16:25:12 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:03.942 16:25:12 -- common/autotest_common.sh@10 -- # set +x 00:14:03.942 [2024-04-26 16:25:12.444543] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
00:14:03.942 [2024-04-26 16:25:12.444601] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:03.942 EAL: No free 2048 kB hugepages reported on node 1 00:14:03.942 [2024-04-26 16:25:12.519470] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.942 [2024-04-26 16:25:12.602364] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:03.942 [2024-04-26 16:25:12.602406] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:03.942 [2024-04-26 16:25:12.602420] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:03.942 [2024-04-26 16:25:12.602429] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:03.942 [2024-04-26 16:25:12.602436] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:03.942 [2024-04-26 16:25:12.602461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:04.522 16:25:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:04.522 16:25:13 -- common/autotest_common.sh@850 -- # return 0 00:14:04.522 16:25:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:04.522 16:25:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:04.522 16:25:13 -- common/autotest_common.sh@10 -- # set +x 00:14:04.522 16:25:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:04.522 16:25:13 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:04.522 16:25:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:04.522 16:25:13 -- common/autotest_common.sh@10 -- # set +x 00:14:04.522 [2024-04-26 16:25:13.310145] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1a2c480/0x1a30970) succeed. 00:14:04.522 [2024-04-26 16:25:13.318622] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1a2d980/0x1a72000) succeed. 
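The two create_ib_device notices just above are the direct effect of the nvmf_create_transport call in the trace; rpc_cmd forwards its arguments to scripts/rpc.py, so the equivalent hand-issued RPC would be roughly:

  # Flags copied verbatim from the rpc_cmd call above; the RPC socket defaults to /var/tmp/spdk.sock.
  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

Creating the transport is what makes the target open the mlx5 devices, hence the two NOTICE lines for mlx5_0 and mlx5_1.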
00:14:04.522 16:25:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:04.522 16:25:13 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:04.522 16:25:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:04.522 16:25:13 -- common/autotest_common.sh@10 -- # set +x 00:14:04.522 Malloc0 00:14:04.522 16:25:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:04.522 16:25:13 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:04.522 16:25:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:04.522 16:25:13 -- common/autotest_common.sh@10 -- # set +x 00:14:04.522 16:25:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:04.522 16:25:13 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:04.522 16:25:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:04.522 16:25:13 -- common/autotest_common.sh@10 -- # set +x 00:14:04.522 16:25:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:04.522 16:25:13 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:04.522 16:25:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:04.522 16:25:13 -- common/autotest_common.sh@10 -- # set +x 00:14:04.522 [2024-04-26 16:25:13.412392] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:04.522 16:25:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:04.522 16:25:13 -- target/queue_depth.sh@30 -- # bdevperf_pid=447630 00:14:04.522 16:25:13 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:04.522 16:25:13 -- target/queue_depth.sh@33 -- # waitforlisten 447630 /var/tmp/bdevperf.sock 00:14:04.522 16:25:13 -- common/autotest_common.sh@817 -- # '[' -z 447630 ']' 00:14:04.522 16:25:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:04.522 16:25:13 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:14:04.522 16:25:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:04.522 16:25:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:04.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:04.522 16:25:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:04.522 16:25:13 -- common/autotest_common.sh@10 -- # set +x 00:14:04.522 [2024-04-26 16:25:13.458334] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
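Collected in one place, the target-side provisioning and the bdevperf launch traced above for the queue_depth test come down to the following sequence (a sketch; rpc_cmd is the autotest wrapper around scripts/rpc.py and paths are relative to the SPDK tree):

  # Target side: a 64 MB malloc bdev with 512-byte blocks, exported through one RDMA subsystem.
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  # Initiator side: bdevperf in wait-for-RPC mode (-z), queue depth 1024, 4 KiB verify I/O, 10 s.
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &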
00:14:04.522 [2024-04-26 16:25:13.458395] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid447630 ] 00:14:04.522 EAL: No free 2048 kB hugepages reported on node 1 00:14:04.522 [2024-04-26 16:25:13.531074] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.781 [2024-04-26 16:25:13.618225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.349 16:25:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:05.349 16:25:14 -- common/autotest_common.sh@850 -- # return 0 00:14:05.349 16:25:14 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:05.349 16:25:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:05.349 16:25:14 -- common/autotest_common.sh@10 -- # set +x 00:14:05.349 NVMe0n1 00:14:05.349 16:25:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:05.349 16:25:14 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:05.608 Running I/O for 10 seconds... 00:14:15.591 00:14:15.591 Latency(us) 00:14:15.591 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:15.591 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:14:15.591 Verification LBA range: start 0x0 length 0x4000 00:14:15.591 NVMe0n1 : 10.03 17870.40 69.81 0.00 0.00 57163.13 21541.40 41259.19 00:14:15.592 =================================================================================================================== 00:14:15.592 Total : 17870.40 69.81 0.00 0.00 57163.13 21541.40 41259.19 00:14:15.592 0 00:14:15.592 16:25:24 -- target/queue_depth.sh@39 -- # killprocess 447630 00:14:15.592 16:25:24 -- common/autotest_common.sh@936 -- # '[' -z 447630 ']' 00:14:15.592 16:25:24 -- common/autotest_common.sh@940 -- # kill -0 447630 00:14:15.592 16:25:24 -- common/autotest_common.sh@941 -- # uname 00:14:15.592 16:25:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:15.592 16:25:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 447630 00:14:15.592 16:25:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:15.592 16:25:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:15.592 16:25:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 447630' 00:14:15.592 killing process with pid 447630 00:14:15.592 16:25:24 -- common/autotest_common.sh@955 -- # kill 447630 00:14:15.592 Received shutdown signal, test time was about 10.000000 seconds 00:14:15.592 00:14:15.592 Latency(us) 00:14:15.592 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:15.592 =================================================================================================================== 00:14:15.592 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:15.592 16:25:24 -- common/autotest_common.sh@960 -- # wait 447630 00:14:15.851 16:25:24 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:15.851 16:25:24 -- target/queue_depth.sh@43 -- # nvmftestfini 00:14:15.851 16:25:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:15.851 16:25:24 -- nvmf/common.sh@117 -- # sync 00:14:15.851 16:25:24 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 
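The measurement itself is driven over the bdevperf RPC socket, as in the two commands traced above; in condensed form:

  # Attach the remote namespace over RDMA, then let bdevperf.py trigger and wait for the timed run.
  rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma \
      -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
  # This run reported ~17870 IOPS at queue depth 1024 (Latency table above) before teardown.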
00:14:15.851 16:25:24 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:15.851 16:25:24 -- nvmf/common.sh@120 -- # set +e 00:14:15.851 16:25:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:15.851 16:25:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:15.851 rmmod nvme_rdma 00:14:15.851 rmmod nvme_fabrics 00:14:15.851 16:25:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:15.851 16:25:24 -- nvmf/common.sh@124 -- # set -e 00:14:15.851 16:25:24 -- nvmf/common.sh@125 -- # return 0 00:14:15.851 16:25:24 -- nvmf/common.sh@478 -- # '[' -n 447582 ']' 00:14:15.851 16:25:24 -- nvmf/common.sh@479 -- # killprocess 447582 00:14:15.851 16:25:24 -- common/autotest_common.sh@936 -- # '[' -z 447582 ']' 00:14:15.851 16:25:24 -- common/autotest_common.sh@940 -- # kill -0 447582 00:14:15.851 16:25:24 -- common/autotest_common.sh@941 -- # uname 00:14:15.851 16:25:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:15.851 16:25:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 447582 00:14:16.110 16:25:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:16.110 16:25:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:16.110 16:25:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 447582' 00:14:16.110 killing process with pid 447582 00:14:16.110 16:25:24 -- common/autotest_common.sh@955 -- # kill 447582 00:14:16.110 16:25:24 -- common/autotest_common.sh@960 -- # wait 447582 00:14:16.110 [2024-04-26 16:25:24.948367] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:14:16.371 16:25:25 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:16.371 16:25:25 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:14:16.371 00:14:16.371 real 0m18.971s 00:14:16.371 user 0m26.007s 00:14:16.371 sys 0m5.376s 00:14:16.371 16:25:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:16.371 16:25:25 -- common/autotest_common.sh@10 -- # set +x 00:14:16.371 ************************************ 00:14:16.371 END TEST nvmf_queue_depth 00:14:16.371 ************************************ 00:14:16.371 16:25:25 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:14:16.371 16:25:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:16.371 16:25:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:16.371 16:25:25 -- common/autotest_common.sh@10 -- # set +x 00:14:16.371 ************************************ 00:14:16.371 START TEST nvmf_multipath 00:14:16.371 ************************************ 00:14:16.371 16:25:25 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:14:16.629 * Looking for test storage... 
00:14:16.629 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:16.629 16:25:25 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:16.629 16:25:25 -- nvmf/common.sh@7 -- # uname -s 00:14:16.629 16:25:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:16.629 16:25:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:16.629 16:25:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:16.629 16:25:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:16.629 16:25:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:16.629 16:25:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:16.629 16:25:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:16.629 16:25:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:16.629 16:25:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:16.629 16:25:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:16.629 16:25:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:14:16.629 16:25:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:14:16.629 16:25:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:16.629 16:25:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:16.629 16:25:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:16.629 16:25:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:16.630 16:25:25 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:16.630 16:25:25 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:16.630 16:25:25 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:16.630 16:25:25 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:16.630 16:25:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.630 16:25:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.630 16:25:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.630 16:25:25 -- paths/export.sh@5 -- # export PATH 00:14:16.630 16:25:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:16.630 16:25:25 -- nvmf/common.sh@47 -- # : 0 00:14:16.630 16:25:25 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:16.630 16:25:25 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:16.630 16:25:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:16.630 16:25:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:16.630 16:25:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:16.630 16:25:25 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:16.630 16:25:25 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:16.630 16:25:25 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:16.630 16:25:25 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:16.630 16:25:25 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:16.630 16:25:25 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:14:16.630 16:25:25 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:16.630 16:25:25 -- target/multipath.sh@43 -- # nvmftestinit 00:14:16.630 16:25:25 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:14:16.630 16:25:25 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:16.630 16:25:25 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:16.630 16:25:25 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:16.630 16:25:25 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:16.630 16:25:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:16.630 16:25:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:16.630 16:25:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:16.630 16:25:25 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:16.630 16:25:25 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:16.630 16:25:25 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:16.630 16:25:25 -- common/autotest_common.sh@10 -- # set +x 00:14:21.899 16:25:30 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:21.899 16:25:30 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:21.899 16:25:30 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:21.899 16:25:30 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:21.899 16:25:30 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:21.899 16:25:30 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:21.899 16:25:30 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:14:21.899 16:25:30 -- nvmf/common.sh@295 -- # net_devs=() 00:14:21.899 16:25:30 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:21.899 16:25:30 -- nvmf/common.sh@296 -- # e810=() 00:14:21.899 16:25:30 -- nvmf/common.sh@296 -- # local -ga e810 00:14:21.899 16:25:30 -- nvmf/common.sh@297 -- # x722=() 00:14:21.899 16:25:30 -- nvmf/common.sh@297 -- # local -ga x722 00:14:21.899 16:25:30 -- nvmf/common.sh@298 -- # mlx=() 00:14:21.899 16:25:30 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:21.899 16:25:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:21.899 16:25:30 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:21.899 16:25:30 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:21.899 16:25:30 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:21.899 16:25:30 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:21.899 16:25:30 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:21.899 16:25:30 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:21.899 16:25:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:21.899 16:25:30 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:21.899 16:25:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:21.900 16:25:30 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:21.900 16:25:30 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:21.900 16:25:30 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:21.900 16:25:30 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:21.900 16:25:30 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:21.900 16:25:30 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:21.900 16:25:30 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:21.900 16:25:30 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:21.900 16:25:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:21.900 16:25:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:14:21.900 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:14:21.900 16:25:30 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:21.900 16:25:30 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:21.900 16:25:30 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:14:21.900 16:25:30 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:14:21.900 16:25:30 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:21.900 16:25:30 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:21.900 16:25:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:21.900 16:25:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:14:21.900 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:14:21.900 16:25:30 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:21.900 16:25:30 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:21.900 16:25:30 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:14:21.900 16:25:30 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:14:21.900 16:25:30 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:21.900 16:25:30 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:21.900 16:25:30 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:21.900 16:25:30 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:21.900 16:25:30 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:21.900 16:25:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:21.900 16:25:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:21.900 16:25:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:21.900 16:25:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:14:21.900 Found net devices under 0000:18:00.0: mlx_0_0 00:14:21.900 16:25:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:21.900 16:25:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:21.900 16:25:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:21.900 16:25:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:21.900 16:25:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:21.900 16:25:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:14:21.900 Found net devices under 0000:18:00.1: mlx_0_1 00:14:21.900 16:25:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:21.900 16:25:30 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:21.900 16:25:30 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:21.900 16:25:30 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:21.900 16:25:30 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:14:21.900 16:25:30 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:14:21.900 16:25:30 -- nvmf/common.sh@409 -- # rdma_device_init 00:14:21.900 16:25:30 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:14:21.900 16:25:30 -- nvmf/common.sh@58 -- # uname 00:14:21.900 16:25:30 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:21.900 16:25:30 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:21.900 16:25:30 -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:21.900 16:25:30 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:21.900 16:25:30 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:21.900 16:25:30 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:21.900 16:25:30 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:21.900 16:25:30 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:21.900 16:25:30 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:14:21.900 16:25:30 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:21.900 16:25:30 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:21.900 16:25:30 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:21.900 16:25:30 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:21.900 16:25:30 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:21.900 16:25:30 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:21.900 16:25:30 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:21.900 16:25:30 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:21.900 16:25:30 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:21.900 16:25:30 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:21.900 16:25:30 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:21.900 16:25:30 -- nvmf/common.sh@105 -- # continue 2 00:14:21.900 16:25:30 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:21.900 16:25:30 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:21.900 16:25:30 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:21.900 16:25:30 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:21.900 16:25:30 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\1 ]] 00:14:21.900 16:25:30 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:21.900 16:25:30 -- nvmf/common.sh@105 -- # continue 2 00:14:21.900 16:25:30 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:21.900 16:25:30 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:21.900 16:25:30 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:21.900 16:25:30 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:21.900 16:25:30 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:21.900 16:25:30 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:21.900 16:25:30 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:21.900 16:25:30 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:21.900 16:25:30 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:21.900 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:21.900 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:14:21.900 altname enp24s0f0np0 00:14:21.900 altname ens785f0np0 00:14:21.900 inet 192.168.100.8/24 scope global mlx_0_0 00:14:21.900 valid_lft forever preferred_lft forever 00:14:21.900 16:25:30 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:21.900 16:25:30 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:21.900 16:25:30 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:21.900 16:25:30 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:21.900 16:25:30 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:21.900 16:25:30 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:21.900 16:25:30 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:21.900 16:25:30 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:21.900 16:25:30 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:21.900 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:21.900 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:14:21.900 altname enp24s0f1np1 00:14:21.900 altname ens785f1np1 00:14:21.900 inet 192.168.100.9/24 scope global mlx_0_1 00:14:21.900 valid_lft forever preferred_lft forever 00:14:21.900 16:25:30 -- nvmf/common.sh@411 -- # return 0 00:14:21.900 16:25:30 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:21.900 16:25:30 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:21.900 16:25:30 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:14:21.900 16:25:30 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:14:21.900 16:25:30 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:21.900 16:25:30 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:21.900 16:25:30 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:21.900 16:25:30 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:21.900 16:25:30 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:21.900 16:25:30 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:21.900 16:25:30 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:21.900 16:25:30 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:21.900 16:25:30 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:21.900 16:25:30 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:21.900 16:25:30 -- nvmf/common.sh@105 -- # continue 2 00:14:21.900 16:25:30 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:21.900 16:25:30 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:21.900 16:25:30 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:21.900 16:25:30 -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:14:21.900 16:25:30 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:21.900 16:25:30 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:21.900 16:25:30 -- nvmf/common.sh@105 -- # continue 2 00:14:21.900 16:25:30 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:21.900 16:25:30 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:21.900 16:25:30 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:21.900 16:25:30 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:21.900 16:25:30 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:21.900 16:25:30 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:21.900 16:25:30 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:21.900 16:25:30 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:21.900 16:25:30 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:21.900 16:25:30 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:21.900 16:25:30 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:21.900 16:25:30 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:21.900 16:25:30 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:14:21.900 192.168.100.9' 00:14:21.900 16:25:30 -- nvmf/common.sh@446 -- # head -n 1 00:14:21.900 16:25:30 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:14:21.900 192.168.100.9' 00:14:21.900 16:25:30 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:21.900 16:25:30 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:14:21.900 192.168.100.9' 00:14:21.900 16:25:30 -- nvmf/common.sh@447 -- # tail -n +2 00:14:21.900 16:25:30 -- nvmf/common.sh@447 -- # head -n 1 00:14:21.900 16:25:30 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:21.900 16:25:30 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:14:21.900 16:25:30 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:21.900 16:25:30 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:14:21.900 16:25:30 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:14:21.900 16:25:30 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:14:21.900 16:25:30 -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:14:21.901 16:25:30 -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:14:21.901 16:25:30 -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:14:21.901 run this test only with TCP transport for now 00:14:21.901 16:25:30 -- target/multipath.sh@53 -- # nvmftestfini 00:14:21.901 16:25:30 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:21.901 16:25:30 -- nvmf/common.sh@117 -- # sync 00:14:21.901 16:25:30 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:21.901 16:25:30 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:21.901 16:25:30 -- nvmf/common.sh@120 -- # set +e 00:14:21.901 16:25:30 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:21.901 16:25:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:21.901 rmmod nvme_rdma 00:14:21.901 rmmod nvme_fabrics 00:14:21.901 16:25:30 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:21.901 16:25:30 -- nvmf/common.sh@124 -- # set -e 00:14:21.901 16:25:30 -- nvmf/common.sh@125 -- # return 0 00:14:21.901 16:25:30 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:14:21.901 16:25:30 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:21.901 16:25:30 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:14:21.901 16:25:30 -- target/multipath.sh@54 -- # exit 0 00:14:21.901 16:25:30 -- target/multipath.sh@1 -- # nvmftestfini 00:14:21.901 16:25:30 -- 
nvmf/common.sh@477 -- # nvmfcleanup 00:14:21.901 16:25:30 -- nvmf/common.sh@117 -- # sync 00:14:21.901 16:25:30 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:21.901 16:25:30 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:21.901 16:25:30 -- nvmf/common.sh@120 -- # set +e 00:14:21.901 16:25:30 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:21.901 16:25:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:21.901 16:25:30 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:21.901 16:25:30 -- nvmf/common.sh@124 -- # set -e 00:14:21.901 16:25:30 -- nvmf/common.sh@125 -- # return 0 00:14:21.901 16:25:30 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:14:21.901 16:25:30 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:21.901 16:25:30 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:14:21.901 00:14:21.901 real 0m5.526s 00:14:21.901 user 0m1.467s 00:14:21.901 sys 0m4.119s 00:14:21.901 16:25:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:21.901 16:25:30 -- common/autotest_common.sh@10 -- # set +x 00:14:21.901 ************************************ 00:14:21.901 END TEST nvmf_multipath 00:14:21.901 ************************************ 00:14:22.160 16:25:30 -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:14:22.160 16:25:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:22.160 16:25:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:22.160 16:25:30 -- common/autotest_common.sh@10 -- # set +x 00:14:22.160 ************************************ 00:14:22.160 START TEST nvmf_zcopy 00:14:22.160 ************************************ 00:14:22.160 16:25:31 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:14:22.160 * Looking for test storage... 
00:14:22.160 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:22.160 16:25:31 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:22.160 16:25:31 -- nvmf/common.sh@7 -- # uname -s 00:14:22.160 16:25:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:22.160 16:25:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:22.160 16:25:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:22.160 16:25:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:22.160 16:25:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:22.160 16:25:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:22.160 16:25:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:22.160 16:25:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:22.160 16:25:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:22.160 16:25:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:22.160 16:25:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:14:22.160 16:25:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:14:22.160 16:25:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:22.160 16:25:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:22.160 16:25:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:22.160 16:25:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:22.160 16:25:31 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:22.160 16:25:31 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:22.160 16:25:31 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:22.160 16:25:31 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:22.160 16:25:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.160 16:25:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.160 16:25:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.160 16:25:31 -- paths/export.sh@5 -- # export PATH 00:14:22.160 16:25:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.160 16:25:31 -- nvmf/common.sh@47 -- # : 0 00:14:22.160 16:25:31 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:22.160 16:25:31 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:22.160 16:25:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:22.160 16:25:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:22.160 16:25:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:22.160 16:25:31 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:22.160 16:25:31 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:22.160 16:25:31 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:22.160 16:25:31 -- target/zcopy.sh@12 -- # nvmftestinit 00:14:22.160 16:25:31 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:14:22.160 16:25:31 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:22.160 16:25:31 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:22.160 16:25:31 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:22.160 16:25:31 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:22.160 16:25:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.160 16:25:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:22.160 16:25:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:22.160 16:25:31 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:22.160 16:25:31 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:22.160 16:25:31 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:22.160 16:25:31 -- common/autotest_common.sh@10 -- # set +x 00:14:28.729 16:25:36 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:28.729 16:25:36 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:28.729 16:25:36 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:28.729 16:25:36 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:28.729 16:25:36 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:28.729 16:25:36 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:28.729 16:25:36 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:28.729 16:25:36 -- nvmf/common.sh@295 -- # net_devs=() 00:14:28.729 16:25:36 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:28.729 16:25:36 -- nvmf/common.sh@296 -- # e810=() 00:14:28.729 16:25:36 -- nvmf/common.sh@296 -- # local -ga e810 00:14:28.729 16:25:36 -- nvmf/common.sh@297 -- # x722=() 
00:14:28.729 16:25:36 -- nvmf/common.sh@297 -- # local -ga x722 00:14:28.729 16:25:36 -- nvmf/common.sh@298 -- # mlx=() 00:14:28.729 16:25:36 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:28.729 16:25:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:28.729 16:25:36 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:28.729 16:25:36 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:28.729 16:25:36 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:28.729 16:25:36 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:28.729 16:25:36 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:28.729 16:25:36 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:28.729 16:25:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:28.729 16:25:36 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:28.729 16:25:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:28.729 16:25:36 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:28.729 16:25:36 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:28.729 16:25:36 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:28.729 16:25:36 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:28.729 16:25:36 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:28.729 16:25:36 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:28.729 16:25:36 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:28.729 16:25:36 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:28.729 16:25:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:28.729 16:25:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:14:28.729 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:14:28.729 16:25:36 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:28.729 16:25:36 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:28.729 16:25:36 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:14:28.729 16:25:36 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:14:28.729 16:25:36 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:28.729 16:25:36 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:28.729 16:25:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:28.729 16:25:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:14:28.729 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:14:28.729 16:25:36 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:28.729 16:25:36 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:28.729 16:25:36 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:14:28.729 16:25:36 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:14:28.729 16:25:36 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:28.729 16:25:36 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:28.729 16:25:36 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:28.729 16:25:36 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:28.729 16:25:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:28.729 16:25:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:28.729 16:25:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:28.729 16:25:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:28.729 16:25:36 -- nvmf/common.sh@389 -- # 
echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:14:28.729 Found net devices under 0000:18:00.0: mlx_0_0 00:14:28.729 16:25:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:28.729 16:25:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:28.729 16:25:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:28.729 16:25:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:28.729 16:25:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:28.729 16:25:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:14:28.729 Found net devices under 0000:18:00.1: mlx_0_1 00:14:28.729 16:25:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:28.729 16:25:36 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:28.729 16:25:36 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:28.729 16:25:36 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:28.729 16:25:36 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:14:28.729 16:25:36 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:14:28.729 16:25:36 -- nvmf/common.sh@409 -- # rdma_device_init 00:14:28.729 16:25:36 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:14:28.729 16:25:36 -- nvmf/common.sh@58 -- # uname 00:14:28.729 16:25:36 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:28.729 16:25:36 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:28.729 16:25:36 -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:28.729 16:25:36 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:28.729 16:25:36 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:28.729 16:25:36 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:28.729 16:25:36 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:28.729 16:25:36 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:28.729 16:25:36 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:14:28.729 16:25:36 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:28.729 16:25:36 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:28.729 16:25:36 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:28.729 16:25:36 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:28.729 16:25:36 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:28.729 16:25:36 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:28.729 16:25:36 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:28.729 16:25:36 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:28.729 16:25:36 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:28.729 16:25:36 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:28.729 16:25:36 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:28.729 16:25:36 -- nvmf/common.sh@105 -- # continue 2 00:14:28.729 16:25:36 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:28.729 16:25:36 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:28.729 16:25:36 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:28.729 16:25:36 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:28.729 16:25:36 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:28.729 16:25:36 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:28.729 16:25:36 -- nvmf/common.sh@105 -- # continue 2 00:14:28.729 16:25:36 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:28.729 16:25:36 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:28.729 16:25:36 -- nvmf/common.sh@112 -- # 
interface=mlx_0_0 00:14:28.729 16:25:36 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:28.729 16:25:36 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:28.729 16:25:36 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:28.729 16:25:36 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:28.729 16:25:36 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:28.729 16:25:36 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:28.729 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:28.729 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:14:28.729 altname enp24s0f0np0 00:14:28.729 altname ens785f0np0 00:14:28.729 inet 192.168.100.8/24 scope global mlx_0_0 00:14:28.729 valid_lft forever preferred_lft forever 00:14:28.729 16:25:36 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:28.729 16:25:36 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:28.729 16:25:36 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:28.730 16:25:36 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:28.730 16:25:36 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:28.730 16:25:36 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:28.730 16:25:36 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:28.730 16:25:36 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:28.730 16:25:36 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:28.730 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:28.730 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:14:28.730 altname enp24s0f1np1 00:14:28.730 altname ens785f1np1 00:14:28.730 inet 192.168.100.9/24 scope global mlx_0_1 00:14:28.730 valid_lft forever preferred_lft forever 00:14:28.730 16:25:36 -- nvmf/common.sh@411 -- # return 0 00:14:28.730 16:25:36 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:28.730 16:25:36 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:28.730 16:25:36 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:14:28.730 16:25:36 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:14:28.730 16:25:36 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:28.730 16:25:36 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:28.730 16:25:36 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:28.730 16:25:36 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:28.730 16:25:36 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:28.730 16:25:36 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:28.730 16:25:36 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:28.730 16:25:36 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:28.730 16:25:36 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:28.730 16:25:36 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:28.730 16:25:36 -- nvmf/common.sh@105 -- # continue 2 00:14:28.730 16:25:36 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:28.730 16:25:36 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:28.730 16:25:36 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:28.730 16:25:36 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:28.730 16:25:36 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:28.730 16:25:36 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:28.730 16:25:36 -- nvmf/common.sh@105 -- # continue 2 00:14:28.730 16:25:36 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:28.730 16:25:36 -- 
nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:28.730 16:25:36 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:28.730 16:25:36 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:28.730 16:25:36 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:28.730 16:25:36 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:28.730 16:25:36 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:28.730 16:25:36 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:28.730 16:25:36 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:28.730 16:25:36 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:28.730 16:25:36 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:28.730 16:25:36 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:28.730 16:25:36 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:14:28.730 192.168.100.9' 00:14:28.730 16:25:36 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:14:28.730 192.168.100.9' 00:14:28.730 16:25:36 -- nvmf/common.sh@446 -- # head -n 1 00:14:28.730 16:25:36 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:28.730 16:25:36 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:14:28.730 192.168.100.9' 00:14:28.730 16:25:36 -- nvmf/common.sh@447 -- # head -n 1 00:14:28.730 16:25:36 -- nvmf/common.sh@447 -- # tail -n +2 00:14:28.730 16:25:36 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:28.730 16:25:36 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:14:28.730 16:25:36 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:28.730 16:25:36 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:14:28.730 16:25:36 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:14:28.730 16:25:36 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:14:28.730 16:25:36 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:14:28.730 16:25:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:28.730 16:25:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:28.730 16:25:36 -- common/autotest_common.sh@10 -- # set +x 00:14:28.730 16:25:36 -- nvmf/common.sh@470 -- # nvmfpid=454759 00:14:28.730 16:25:36 -- nvmf/common.sh@471 -- # waitforlisten 454759 00:14:28.730 16:25:36 -- common/autotest_common.sh@817 -- # '[' -z 454759 ']' 00:14:28.730 16:25:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.730 16:25:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:28.730 16:25:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.730 16:25:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:28.730 16:25:36 -- common/autotest_common.sh@10 -- # set +x 00:14:28.730 16:25:36 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:28.730 [2024-04-26 16:25:36.893654] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
00:14:28.730 [2024-04-26 16:25:36.893711] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:28.730 EAL: No free 2048 kB hugepages reported on node 1 00:14:28.730 [2024-04-26 16:25:36.966366] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.730 [2024-04-26 16:25:37.045989] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:28.730 [2024-04-26 16:25:37.046027] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:28.730 [2024-04-26 16:25:37.046037] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:28.730 [2024-04-26 16:25:37.046045] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:28.730 [2024-04-26 16:25:37.046052] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:28.730 [2024-04-26 16:25:37.046074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:28.730 16:25:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:28.730 16:25:37 -- common/autotest_common.sh@850 -- # return 0 00:14:28.730 16:25:37 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:28.730 16:25:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:28.730 16:25:37 -- common/autotest_common.sh@10 -- # set +x 00:14:28.730 16:25:37 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:28.730 16:25:37 -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:14:28.730 16:25:37 -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:14:28.730 Unsupported transport: rdma 00:14:28.730 16:25:37 -- target/zcopy.sh@17 -- # exit 0 00:14:28.730 16:25:37 -- target/zcopy.sh@1 -- # process_shm --id 0 00:14:28.730 16:25:37 -- common/autotest_common.sh@794 -- # type=--id 00:14:28.730 16:25:37 -- common/autotest_common.sh@795 -- # id=0 00:14:28.730 16:25:37 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:14:28.730 16:25:37 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:28.730 16:25:37 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:14:28.730 16:25:37 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:14:28.730 16:25:37 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:14:28.730 16:25:37 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:28.990 nvmf_trace.0 00:14:28.990 16:25:37 -- common/autotest_common.sh@809 -- # return 0 00:14:28.990 16:25:37 -- target/zcopy.sh@1 -- # nvmftestfini 00:14:28.990 16:25:37 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:28.990 16:25:37 -- nvmf/common.sh@117 -- # sync 00:14:28.990 16:25:37 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:28.990 16:25:37 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:28.990 16:25:37 -- nvmf/common.sh@120 -- # set +e 00:14:28.990 16:25:37 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:28.990 16:25:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:28.990 rmmod nvme_rdma 00:14:28.990 rmmod nvme_fabrics 00:14:28.990 16:25:37 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:28.990 16:25:37 -- nvmf/common.sh@124 -- # set -e 
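The zcopy test ends almost as soon as it starts on this run: zcopy.sh only exercises the TCP transport, so with --transport=rdma it prints 'Unsupported transport: rdma' and exits 0, and the EXIT trap then archives the shared-memory trace file before tearing the target down. A minimal sketch of that guard and of the archive step seen in the trace; the variable names below are assumptions, since the xtrace output only shows the expanded values:

    # guard at the top of the test body (sketch; $TEST_TRANSPORT is an assumed name)
    if [ "$TEST_TRANSPORT" != tcp ]; then
        echo "Unsupported transport: $TEST_TRANSPORT"
        exit 0
    fi

    # process_shm step from the trace: pack /dev/shm/nvmf_trace.0 into the output dir
    shm_file=nvmf_trace.0
    tar -C /dev/shm/ -cvzf "$output_dir/${shm_file}_shm.tar.gz" "$shm_file"   # $output_dir assumed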
00:14:28.990 16:25:37 -- nvmf/common.sh@125 -- # return 0 00:14:28.990 16:25:37 -- nvmf/common.sh@478 -- # '[' -n 454759 ']' 00:14:28.990 16:25:37 -- nvmf/common.sh@479 -- # killprocess 454759 00:14:28.990 16:25:37 -- common/autotest_common.sh@936 -- # '[' -z 454759 ']' 00:14:28.990 16:25:37 -- common/autotest_common.sh@940 -- # kill -0 454759 00:14:28.990 16:25:37 -- common/autotest_common.sh@941 -- # uname 00:14:28.990 16:25:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:28.990 16:25:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 454759 00:14:28.990 16:25:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:28.990 16:25:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:28.990 16:25:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 454759' 00:14:28.990 killing process with pid 454759 00:14:28.990 16:25:37 -- common/autotest_common.sh@955 -- # kill 454759 00:14:28.990 16:25:37 -- common/autotest_common.sh@960 -- # wait 454759 00:14:29.249 16:25:38 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:29.249 16:25:38 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:14:29.249 00:14:29.249 real 0m7.055s 00:14:29.249 user 0m3.036s 00:14:29.249 sys 0m4.668s 00:14:29.249 16:25:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:29.249 16:25:38 -- common/autotest_common.sh@10 -- # set +x 00:14:29.249 ************************************ 00:14:29.249 END TEST nvmf_zcopy 00:14:29.249 ************************************ 00:14:29.249 16:25:38 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:14:29.249 16:25:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:29.249 16:25:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:29.249 16:25:38 -- common/autotest_common.sh@10 -- # set +x 00:14:29.509 ************************************ 00:14:29.509 START TEST nvmf_nmic 00:14:29.509 ************************************ 00:14:29.509 16:25:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:14:29.509 * Looking for test storage... 
00:14:29.509 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:29.509 16:25:38 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:29.509 16:25:38 -- nvmf/common.sh@7 -- # uname -s 00:14:29.509 16:25:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:29.509 16:25:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:29.509 16:25:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:29.509 16:25:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:29.509 16:25:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:29.509 16:25:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:29.509 16:25:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:29.509 16:25:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:29.509 16:25:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:29.509 16:25:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:29.509 16:25:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:14:29.509 16:25:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:14:29.509 16:25:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:29.509 16:25:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:29.509 16:25:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:29.509 16:25:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:29.509 16:25:38 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:29.509 16:25:38 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:29.509 16:25:38 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:29.509 16:25:38 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:29.509 16:25:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.509 16:25:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.509 16:25:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.509 16:25:38 -- paths/export.sh@5 -- # export PATH 00:14:29.509 16:25:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.509 16:25:38 -- nvmf/common.sh@47 -- # : 0 00:14:29.509 16:25:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:29.509 16:25:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:29.509 16:25:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:29.509 16:25:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:29.509 16:25:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:29.509 16:25:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:29.509 16:25:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:29.509 16:25:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:29.509 16:25:38 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:29.509 16:25:38 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:29.509 16:25:38 -- target/nmic.sh@14 -- # nvmftestinit 00:14:29.509 16:25:38 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:14:29.509 16:25:38 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:29.509 16:25:38 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:29.509 16:25:38 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:29.509 16:25:38 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:29.509 16:25:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:29.509 16:25:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:29.510 16:25:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:29.510 16:25:38 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:29.510 16:25:38 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:29.510 16:25:38 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:29.510 16:25:38 -- common/autotest_common.sh@10 -- # set +x 00:14:36.083 16:25:44 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:36.083 16:25:44 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:36.083 16:25:44 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:36.083 16:25:44 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:36.083 16:25:44 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:36.083 16:25:44 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:36.083 16:25:44 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:36.083 16:25:44 -- nvmf/common.sh@295 -- # net_devs=() 00:14:36.083 16:25:44 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:36.083 16:25:44 -- nvmf/common.sh@296 -- # 
e810=() 00:14:36.083 16:25:44 -- nvmf/common.sh@296 -- # local -ga e810 00:14:36.083 16:25:44 -- nvmf/common.sh@297 -- # x722=() 00:14:36.083 16:25:44 -- nvmf/common.sh@297 -- # local -ga x722 00:14:36.083 16:25:44 -- nvmf/common.sh@298 -- # mlx=() 00:14:36.083 16:25:44 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:36.083 16:25:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:36.083 16:25:44 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:36.083 16:25:44 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:36.083 16:25:44 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:36.083 16:25:44 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:36.083 16:25:44 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:36.083 16:25:44 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:36.083 16:25:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:36.083 16:25:44 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:36.083 16:25:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:36.083 16:25:44 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:36.083 16:25:44 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:36.083 16:25:44 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:36.083 16:25:44 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:36.083 16:25:44 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:36.083 16:25:44 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:36.083 16:25:44 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:36.083 16:25:44 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:36.083 16:25:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:36.083 16:25:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:14:36.083 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:14:36.083 16:25:44 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:36.083 16:25:44 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:36.083 16:25:44 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:14:36.083 16:25:44 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:14:36.083 16:25:44 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:36.083 16:25:44 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:36.083 16:25:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:36.083 16:25:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:14:36.083 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:14:36.083 16:25:44 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:36.083 16:25:44 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:36.083 16:25:44 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:14:36.083 16:25:44 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:14:36.083 16:25:44 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:36.083 16:25:44 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:36.083 16:25:44 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:36.083 16:25:44 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:36.083 16:25:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:36.083 16:25:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:36.083 16:25:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 
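Each PCI function that survives the vendor/device filtering is mapped to its network interface by listing /sys/bus/pci/devices/<addr>/net/ and stripping the sysfs path, which is where the 'Found net devices under ...' messages in this trace come from. A standalone sketch of that lookup, using the first address from this log as the example:

    # map a PCI address to its net device name(s) via sysfs
    pci=0000:18:00.0                                   # example address from the log
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"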
00:14:36.083 16:25:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:36.083 16:25:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:14:36.083 Found net devices under 0000:18:00.0: mlx_0_0 00:14:36.083 16:25:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:36.083 16:25:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:36.083 16:25:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:36.083 16:25:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:36.083 16:25:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:36.083 16:25:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:14:36.083 Found net devices under 0000:18:00.1: mlx_0_1 00:14:36.083 16:25:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:36.083 16:25:44 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:36.083 16:25:44 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:36.083 16:25:44 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:36.083 16:25:44 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:14:36.083 16:25:44 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:14:36.083 16:25:44 -- nvmf/common.sh@409 -- # rdma_device_init 00:14:36.083 16:25:44 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:14:36.083 16:25:44 -- nvmf/common.sh@58 -- # uname 00:14:36.083 16:25:44 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:36.083 16:25:44 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:36.083 16:25:44 -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:36.083 16:25:44 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:36.083 16:25:44 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:36.083 16:25:44 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:36.083 16:25:44 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:36.083 16:25:44 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:36.083 16:25:44 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:14:36.083 16:25:44 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:36.083 16:25:44 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:36.083 16:25:44 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:36.083 16:25:44 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:36.083 16:25:44 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:36.083 16:25:44 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:36.083 16:25:44 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:36.083 16:25:44 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:36.083 16:25:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:36.083 16:25:44 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:36.083 16:25:44 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:36.083 16:25:44 -- nvmf/common.sh@105 -- # continue 2 00:14:36.083 16:25:44 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:36.083 16:25:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:36.083 16:25:44 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:36.083 16:25:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:36.083 16:25:44 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:36.083 16:25:44 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:36.083 16:25:44 -- nvmf/common.sh@105 -- # continue 2 00:14:36.083 16:25:44 -- nvmf/common.sh@73 -- # for nic_name in 
$(get_rdma_if_list) 00:14:36.083 16:25:44 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:36.083 16:25:44 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:36.083 16:25:44 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:36.083 16:25:44 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:36.083 16:25:44 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:36.083 16:25:44 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:36.083 16:25:44 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:36.083 16:25:44 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:36.083 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:36.083 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:14:36.083 altname enp24s0f0np0 00:14:36.083 altname ens785f0np0 00:14:36.083 inet 192.168.100.8/24 scope global mlx_0_0 00:14:36.083 valid_lft forever preferred_lft forever 00:14:36.083 16:25:44 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:36.083 16:25:44 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:36.083 16:25:44 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:36.083 16:25:44 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:36.083 16:25:44 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:36.083 16:25:44 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:36.083 16:25:44 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:36.083 16:25:44 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:36.083 16:25:44 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:36.083 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:36.083 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:14:36.083 altname enp24s0f1np1 00:14:36.083 altname ens785f1np1 00:14:36.083 inet 192.168.100.9/24 scope global mlx_0_1 00:14:36.083 valid_lft forever preferred_lft forever 00:14:36.083 16:25:44 -- nvmf/common.sh@411 -- # return 0 00:14:36.083 16:25:44 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:36.083 16:25:44 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:36.083 16:25:44 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:14:36.083 16:25:44 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:14:36.083 16:25:44 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:36.083 16:25:44 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:36.083 16:25:44 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:36.083 16:25:44 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:36.083 16:25:44 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:36.083 16:25:44 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:36.083 16:25:44 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:36.083 16:25:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:36.083 16:25:44 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:36.083 16:25:44 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:36.083 16:25:44 -- nvmf/common.sh@105 -- # continue 2 00:14:36.083 16:25:44 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:36.083 16:25:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:36.083 16:25:44 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:36.083 16:25:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:36.083 16:25:44 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:36.083 16:25:44 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:36.083 16:25:44 -- 
nvmf/common.sh@105 -- # continue 2 00:14:36.083 16:25:44 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:36.083 16:25:44 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:36.083 16:25:44 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:36.083 16:25:44 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:36.083 16:25:44 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:36.083 16:25:44 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:36.083 16:25:44 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:36.083 16:25:44 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:36.083 16:25:44 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:36.083 16:25:44 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:36.083 16:25:44 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:36.083 16:25:44 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:36.083 16:25:44 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:14:36.083 192.168.100.9' 00:14:36.083 16:25:44 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:14:36.083 192.168.100.9' 00:14:36.083 16:25:44 -- nvmf/common.sh@446 -- # head -n 1 00:14:36.083 16:25:45 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:36.083 16:25:45 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:14:36.083 192.168.100.9' 00:14:36.083 16:25:45 -- nvmf/common.sh@447 -- # tail -n +2 00:14:36.083 16:25:45 -- nvmf/common.sh@447 -- # head -n 1 00:14:36.083 16:25:45 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:36.083 16:25:45 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:14:36.083 16:25:45 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:36.083 16:25:45 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:14:36.083 16:25:45 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:14:36.083 16:25:45 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:14:36.083 16:25:45 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:14:36.083 16:25:45 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:36.083 16:25:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:36.084 16:25:45 -- common/autotest_common.sh@10 -- # set +x 00:14:36.084 16:25:45 -- nvmf/common.sh@470 -- # nvmfpid=457916 00:14:36.084 16:25:45 -- nvmf/common.sh@471 -- # waitforlisten 457916 00:14:36.084 16:25:45 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:36.084 16:25:45 -- common/autotest_common.sh@817 -- # '[' -z 457916 ']' 00:14:36.084 16:25:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.084 16:25:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:36.084 16:25:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:36.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:36.084 16:25:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:36.084 16:25:45 -- common/autotest_common.sh@10 -- # set +x 00:14:36.084 [2024-04-26 16:25:45.090946] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
00:14:36.084 [2024-04-26 16:25:45.090999] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:36.342 EAL: No free 2048 kB hugepages reported on node 1 00:14:36.342 [2024-04-26 16:25:45.166016] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:36.342 [2024-04-26 16:25:45.251756] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:36.342 [2024-04-26 16:25:45.251802] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:36.342 [2024-04-26 16:25:45.251811] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:36.342 [2024-04-26 16:25:45.251836] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:36.342 [2024-04-26 16:25:45.251843] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:36.342 [2024-04-26 16:25:45.251901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:36.342 [2024-04-26 16:25:45.251986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:36.342 [2024-04-26 16:25:45.252064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:36.342 [2024-04-26 16:25:45.252066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.909 16:25:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:36.909 16:25:45 -- common/autotest_common.sh@850 -- # return 0 00:14:36.909 16:25:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:36.909 16:25:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:36.909 16:25:45 -- common/autotest_common.sh@10 -- # set +x 00:14:37.179 16:25:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:37.179 16:25:45 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:37.179 16:25:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:37.179 16:25:45 -- common/autotest_common.sh@10 -- # set +x 00:14:37.179 [2024-04-26 16:25:45.982851] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1818310/0x181c800) succeed. 00:14:37.179 [2024-04-26 16:25:45.993231] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1819950/0x185de90) succeed. 
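With nvmf_tgt listening on /var/tmp/spdk.sock and both mlx5 IB devices created, the rest of the nmic setup is driven over JSON-RPC, as the rpc_cmd calls that follow show. A hedged sketch of the equivalent manual sequence with scripts/rpc.py, reusing the transport options, bdev size and subsystem names from this run:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0          # 64 MB malloc bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The "single bdev can't be used in multiple subsystems" case then repeats nvmf_subsystem_add_ns against a second subsystem (cnode2) and expects the -32602 error that appears below.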
00:14:37.179 16:25:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:37.179 16:25:46 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:37.179 16:25:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:37.179 16:25:46 -- common/autotest_common.sh@10 -- # set +x 00:14:37.179 Malloc0 00:14:37.179 16:25:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:37.179 16:25:46 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:37.179 16:25:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:37.179 16:25:46 -- common/autotest_common.sh@10 -- # set +x 00:14:37.179 16:25:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:37.179 16:25:46 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:37.179 16:25:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:37.179 16:25:46 -- common/autotest_common.sh@10 -- # set +x 00:14:37.179 16:25:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:37.179 16:25:46 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:37.179 16:25:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:37.179 16:25:46 -- common/autotest_common.sh@10 -- # set +x 00:14:37.179 [2024-04-26 16:25:46.165300] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:37.179 16:25:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:37.179 16:25:46 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:14:37.180 test case1: single bdev can't be used in multiple subsystems 00:14:37.180 16:25:46 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:14:37.180 16:25:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:37.180 16:25:46 -- common/autotest_common.sh@10 -- # set +x 00:14:37.180 16:25:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:37.180 16:25:46 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:14:37.180 16:25:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:37.180 16:25:46 -- common/autotest_common.sh@10 -- # set +x 00:14:37.180 16:25:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:37.180 16:25:46 -- target/nmic.sh@28 -- # nmic_status=0 00:14:37.180 16:25:46 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:14:37.180 16:25:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:37.180 16:25:46 -- common/autotest_common.sh@10 -- # set +x 00:14:37.180 [2024-04-26 16:25:46.189180] bdev.c:7988:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:14:37.180 [2024-04-26 16:25:46.189200] subsystem.c:1934:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:14:37.180 [2024-04-26 16:25:46.189209] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.180 request: 00:14:37.180 { 00:14:37.180 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:37.180 "namespace": { 00:14:37.180 "bdev_name": "Malloc0", 00:14:37.180 "no_auto_visible": false 00:14:37.180 }, 00:14:37.180 "method": "nvmf_subsystem_add_ns", 00:14:37.180 "req_id": 1 00:14:37.180 } 00:14:37.180 Got JSON-RPC error response 
00:14:37.180 response: 00:14:37.180 { 00:14:37.180 "code": -32602, 00:14:37.180 "message": "Invalid parameters" 00:14:37.180 } 00:14:37.180 16:25:46 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:14:37.180 16:25:46 -- target/nmic.sh@29 -- # nmic_status=1 00:14:37.180 16:25:46 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:14:37.180 16:25:46 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:14:37.180 Adding namespace failed - expected result. 00:14:37.180 16:25:46 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:14:37.180 test case2: host connect to nvmf target in multiple paths 00:14:37.180 16:25:46 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:14:37.180 16:25:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:37.180 16:25:46 -- common/autotest_common.sh@10 -- # set +x 00:14:37.439 [2024-04-26 16:25:46.205263] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:14:37.439 16:25:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:37.439 16:25:46 -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:38.814 16:25:47 -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:14:40.718 16:25:49 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:14:40.718 16:25:49 -- common/autotest_common.sh@1184 -- # local i=0 00:14:40.718 16:25:49 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:14:40.718 16:25:49 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:14:40.718 16:25:49 -- common/autotest_common.sh@1191 -- # sleep 2 00:14:42.757 16:25:51 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:14:42.757 16:25:51 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:14:42.757 16:25:51 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:14:42.757 16:25:51 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:14:42.757 16:25:51 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:14:42.757 16:25:51 -- common/autotest_common.sh@1194 -- # return 0 00:14:42.757 16:25:51 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:42.757 [global] 00:14:42.757 thread=1 00:14:42.757 invalidate=1 00:14:42.757 rw=write 00:14:42.757 time_based=1 00:14:42.757 runtime=1 00:14:42.757 ioengine=libaio 00:14:42.757 direct=1 00:14:42.757 bs=4096 00:14:42.757 iodepth=1 00:14:42.757 norandommap=0 00:14:42.758 numjobs=1 00:14:42.758 00:14:42.758 verify_dump=1 00:14:42.758 verify_backlog=512 00:14:42.758 verify_state_save=0 00:14:42.758 do_verify=1 00:14:42.758 verify=crc32c-intel 00:14:42.758 [job0] 00:14:42.758 filename=/dev/nvme0n1 00:14:42.758 Could not set queue depth (nvme0n1) 00:14:42.758 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:42.758 fio-3.35 00:14:42.758 Starting 1 thread 00:14:44.285 00:14:44.285 job0: (groupid=0, jobs=1): err= 0: pid=458943: Fri Apr 26 16:25:52 2024 
00:14:44.285 read: IOPS=6649, BW=26.0MiB/s (27.2MB/s)(26.0MiB/1001msec) 00:14:44.285 slat (nsec): min=8146, max=37233, avg=8880.36, stdev=1361.57 00:14:44.285 clat (usec): min=39, max=330, avg=64.07, stdev=10.97 00:14:44.285 lat (usec): min=59, max=339, avg=72.95, stdev=11.14 00:14:44.285 clat percentiles (usec): 00:14:44.285 | 1.00th=[ 55], 5.00th=[ 57], 10.00th=[ 59], 20.00th=[ 60], 00:14:44.285 | 30.00th=[ 61], 40.00th=[ 62], 50.00th=[ 63], 60.00th=[ 65], 00:14:44.285 | 70.00th=[ 66], 80.00th=[ 68], 90.00th=[ 70], 95.00th=[ 72], 00:14:44.285 | 99.00th=[ 84], 99.50th=[ 145], 99.90th=[ 217], 99.95th=[ 235], 00:14:44.285 | 99.99th=[ 330] 00:14:44.285 write: IOPS=6750, BW=26.4MiB/s (27.6MB/s)(26.4MiB/1001msec); 0 zone resets 00:14:44.286 slat (nsec): min=8870, max=53868, avg=11110.83, stdev=1405.83 00:14:44.286 clat (usec): min=45, max=310, avg=61.10, stdev=10.26 00:14:44.286 lat (usec): min=58, max=321, avg=72.21, stdev=10.40 00:14:44.286 clat percentiles (usec): 00:14:44.286 | 1.00th=[ 52], 5.00th=[ 55], 10.00th=[ 56], 20.00th=[ 58], 00:14:44.286 | 30.00th=[ 59], 40.00th=[ 60], 50.00th=[ 61], 60.00th=[ 62], 00:14:44.286 | 70.00th=[ 63], 80.00th=[ 64], 90.00th=[ 67], 95.00th=[ 69], 00:14:44.286 | 99.00th=[ 77], 99.50th=[ 116], 99.90th=[ 221], 99.95th=[ 247], 00:14:44.286 | 99.99th=[ 310] 00:14:44.286 bw ( KiB/s): min=28544, max=28544, per=100.00%, avg=28544.00, stdev= 0.00, samples=1 00:14:44.286 iops : min= 7136, max= 7136, avg=7136.00, stdev= 0.00, samples=1 00:14:44.286 lat (usec) : 50=0.19%, 100=99.14%, 250=0.64%, 500=0.03% 00:14:44.286 cpu : usr=7.30%, sys=14.30%, ctx=13413, majf=0, minf=2 00:14:44.286 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:44.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:44.286 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:44.286 issued rwts: total=6656,6757,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:44.286 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:44.286 00:14:44.286 Run status group 0 (all jobs): 00:14:44.286 READ: bw=26.0MiB/s (27.2MB/s), 26.0MiB/s-26.0MiB/s (27.2MB/s-27.2MB/s), io=26.0MiB (27.3MB), run=1001-1001msec 00:14:44.286 WRITE: bw=26.4MiB/s (27.6MB/s), 26.4MiB/s-26.4MiB/s (27.6MB/s-27.6MB/s), io=26.4MiB (27.7MB), run=1001-1001msec 00:14:44.286 00:14:44.286 Disk stats (read/write): 00:14:44.286 nvme0n1: ios=5951/6144, merge=0/0, ticks=353/353, in_queue=706, util=90.78% 00:14:44.286 16:25:52 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:50.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:14:50.852 16:25:59 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:50.852 16:25:59 -- common/autotest_common.sh@1205 -- # local i=0 00:14:50.852 16:25:59 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:14:50.852 16:25:59 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:50.852 16:25:59 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:50.852 16:25:59 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:14:50.852 16:25:59 -- common/autotest_common.sh@1217 -- # return 0 00:14:50.852 16:25:59 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:14:50.852 16:25:59 -- target/nmic.sh@53 -- # nvmftestfini 00:14:50.852 16:25:59 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:50.852 16:25:59 -- nvmf/common.sh@117 -- # sync 00:14:50.852 16:25:59 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 
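The fio job shown above is generated by scripts/fio-wrapper from '-p nvmf -i 4096 -d 1 -t write -r 1 -v'. A roughly equivalent standalone invocation against the same namespace (device name taken from the printed job file; exact wrapper defaults may differ) would be:

    fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
        --rw=write --bs=4096 --iodepth=1 --numjobs=1 \
        --time_based=1 --runtime=1 \
        --do_verify=1 --verify=crc32c-intel --verify_dump=1 --verify_backlog=512

The verify settings are also why a write-only job reports read bandwidth in the summary above: the written data is read back and checked against crc32c.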
00:14:50.852 16:25:59 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:50.852 16:25:59 -- nvmf/common.sh@120 -- # set +e 00:14:50.852 16:25:59 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:50.852 16:25:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:50.852 rmmod nvme_rdma 00:14:50.852 rmmod nvme_fabrics 00:14:50.852 16:25:59 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:50.852 16:25:59 -- nvmf/common.sh@124 -- # set -e 00:14:50.852 16:25:59 -- nvmf/common.sh@125 -- # return 0 00:14:50.852 16:25:59 -- nvmf/common.sh@478 -- # '[' -n 457916 ']' 00:14:50.852 16:25:59 -- nvmf/common.sh@479 -- # killprocess 457916 00:14:50.852 16:25:59 -- common/autotest_common.sh@936 -- # '[' -z 457916 ']' 00:14:50.852 16:25:59 -- common/autotest_common.sh@940 -- # kill -0 457916 00:14:50.852 16:25:59 -- common/autotest_common.sh@941 -- # uname 00:14:50.852 16:25:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:50.852 16:25:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 457916 00:14:50.852 16:25:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:50.852 16:25:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:50.852 16:25:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 457916' 00:14:50.852 killing process with pid 457916 00:14:50.852 16:25:59 -- common/autotest_common.sh@955 -- # kill 457916 00:14:50.852 16:25:59 -- common/autotest_common.sh@960 -- # wait 457916 00:14:50.852 [2024-04-26 16:25:59.485931] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:14:50.852 16:25:59 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:50.852 16:25:59 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:14:50.852 00:14:50.852 real 0m21.413s 00:14:50.852 user 1m2.239s 00:14:50.852 sys 0m6.199s 00:14:50.852 16:25:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:50.852 16:25:59 -- common/autotest_common.sh@10 -- # set +x 00:14:50.852 ************************************ 00:14:50.852 END TEST nvmf_nmic 00:14:50.852 ************************************ 00:14:50.852 16:25:59 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:14:50.852 16:25:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:50.852 16:25:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:50.852 16:25:59 -- common/autotest_common.sh@10 -- # set +x 00:14:51.111 ************************************ 00:14:51.111 START TEST nvmf_fio_target 00:14:51.111 ************************************ 00:14:51.111 16:25:59 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:14:51.111 * Looking for test storage... 
00:14:51.111 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:51.111 16:25:59 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:51.111 16:25:59 -- nvmf/common.sh@7 -- # uname -s 00:14:51.111 16:26:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:51.111 16:26:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:51.111 16:26:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:51.111 16:26:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:51.111 16:26:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:51.111 16:26:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:51.111 16:26:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:51.111 16:26:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:51.111 16:26:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:51.111 16:26:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:51.111 16:26:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:14:51.111 16:26:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:14:51.111 16:26:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:51.111 16:26:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:51.111 16:26:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:51.111 16:26:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:51.111 16:26:00 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:51.111 16:26:00 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:51.111 16:26:00 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:51.111 16:26:00 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:51.111 16:26:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.111 16:26:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.111 16:26:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.111 16:26:00 -- paths/export.sh@5 -- # export PATH 00:14:51.111 16:26:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.111 16:26:00 -- nvmf/common.sh@47 -- # : 0 00:14:51.111 16:26:00 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:51.111 16:26:00 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:51.111 16:26:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:51.111 16:26:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:51.111 16:26:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:51.111 16:26:00 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:51.111 16:26:00 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:51.111 16:26:00 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:51.111 16:26:00 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:51.111 16:26:00 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:51.111 16:26:00 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:51.111 16:26:00 -- target/fio.sh@16 -- # nvmftestinit 00:14:51.111 16:26:00 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:14:51.111 16:26:00 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:51.111 16:26:00 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:51.111 16:26:00 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:51.111 16:26:00 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:51.111 16:26:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.111 16:26:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:51.111 16:26:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.111 16:26:00 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:51.111 16:26:00 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:51.111 16:26:00 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:51.111 16:26:00 -- common/autotest_common.sh@10 -- # set +x 00:14:57.691 16:26:05 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:57.691 16:26:05 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:57.691 16:26:05 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:57.691 16:26:05 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:57.691 16:26:05 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:57.691 16:26:05 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:57.691 16:26:05 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:57.691 16:26:05 -- nvmf/common.sh@295 -- # net_devs=() 
00:14:57.691 16:26:05 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:57.691 16:26:05 -- nvmf/common.sh@296 -- # e810=() 00:14:57.691 16:26:05 -- nvmf/common.sh@296 -- # local -ga e810 00:14:57.691 16:26:05 -- nvmf/common.sh@297 -- # x722=() 00:14:57.691 16:26:05 -- nvmf/common.sh@297 -- # local -ga x722 00:14:57.691 16:26:05 -- nvmf/common.sh@298 -- # mlx=() 00:14:57.691 16:26:05 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:57.691 16:26:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:57.691 16:26:05 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:57.691 16:26:05 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:57.691 16:26:05 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:57.691 16:26:05 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:57.691 16:26:05 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:57.691 16:26:05 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:57.691 16:26:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:57.691 16:26:05 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:57.691 16:26:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:57.691 16:26:05 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:57.691 16:26:05 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:57.691 16:26:05 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:57.691 16:26:05 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:57.691 16:26:05 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:57.691 16:26:05 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:57.691 16:26:05 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:57.691 16:26:05 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:57.691 16:26:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:57.691 16:26:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:14:57.691 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:14:57.691 16:26:05 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:57.691 16:26:05 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:57.691 16:26:05 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:14:57.691 16:26:05 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:14:57.691 16:26:05 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:57.691 16:26:05 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:57.691 16:26:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:57.691 16:26:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:14:57.691 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:14:57.691 16:26:05 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:57.691 16:26:05 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:57.691 16:26:05 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:14:57.691 16:26:05 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:14:57.691 16:26:05 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:57.691 16:26:05 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:57.691 16:26:05 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:57.692 16:26:05 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:57.692 16:26:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:57.692 16:26:05 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:57.692 16:26:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:57.692 16:26:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:57.692 16:26:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:14:57.692 Found net devices under 0000:18:00.0: mlx_0_0 00:14:57.692 16:26:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:57.692 16:26:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:57.692 16:26:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:57.692 16:26:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:57.692 16:26:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:57.692 16:26:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:14:57.692 Found net devices under 0000:18:00.1: mlx_0_1 00:14:57.692 16:26:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:57.692 16:26:05 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:57.692 16:26:05 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:57.692 16:26:05 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:57.692 16:26:05 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:14:57.692 16:26:05 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:14:57.692 16:26:05 -- nvmf/common.sh@409 -- # rdma_device_init 00:14:57.692 16:26:05 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:14:57.692 16:26:05 -- nvmf/common.sh@58 -- # uname 00:14:57.692 16:26:05 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:57.692 16:26:05 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:57.692 16:26:05 -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:57.692 16:26:05 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:57.692 16:26:05 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:57.692 16:26:05 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:57.692 16:26:05 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:57.692 16:26:05 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:57.692 16:26:05 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:14:57.692 16:26:05 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:57.692 16:26:05 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:57.692 16:26:05 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:57.692 16:26:05 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:57.692 16:26:05 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:57.692 16:26:05 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:57.692 16:26:05 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:57.692 16:26:05 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:57.692 16:26:05 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:57.692 16:26:05 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:57.692 16:26:05 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:57.692 16:26:05 -- nvmf/common.sh@105 -- # continue 2 00:14:57.692 16:26:05 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:57.692 16:26:05 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:57.692 16:26:05 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:57.692 16:26:05 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:57.692 16:26:05 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:57.692 16:26:05 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:57.692 16:26:05 -- 
nvmf/common.sh@105 -- # continue 2 00:14:57.692 16:26:05 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:57.692 16:26:05 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:57.692 16:26:05 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:57.692 16:26:05 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:57.692 16:26:05 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:57.692 16:26:05 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:57.692 16:26:05 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:57.692 16:26:05 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:57.692 16:26:05 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:57.692 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:57.692 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:14:57.692 altname enp24s0f0np0 00:14:57.692 altname ens785f0np0 00:14:57.692 inet 192.168.100.8/24 scope global mlx_0_0 00:14:57.692 valid_lft forever preferred_lft forever 00:14:57.692 16:26:05 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:57.692 16:26:05 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:57.692 16:26:05 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:57.692 16:26:05 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:57.692 16:26:05 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:57.692 16:26:05 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:57.692 16:26:05 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:57.692 16:26:05 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:57.692 16:26:05 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:57.692 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:57.692 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:14:57.692 altname enp24s0f1np1 00:14:57.692 altname ens785f1np1 00:14:57.692 inet 192.168.100.9/24 scope global mlx_0_1 00:14:57.692 valid_lft forever preferred_lft forever 00:14:57.692 16:26:05 -- nvmf/common.sh@411 -- # return 0 00:14:57.692 16:26:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:57.692 16:26:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:57.692 16:26:05 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:14:57.692 16:26:05 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:14:57.692 16:26:05 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:57.692 16:26:05 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:57.692 16:26:05 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:57.692 16:26:05 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:57.692 16:26:05 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:57.692 16:26:05 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:57.692 16:26:05 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:57.692 16:26:05 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:57.692 16:26:05 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:57.692 16:26:05 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:57.692 16:26:05 -- nvmf/common.sh@105 -- # continue 2 00:14:57.692 16:26:05 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:57.692 16:26:05 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:57.692 16:26:05 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:57.692 16:26:05 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:57.692 16:26:05 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
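The allocate_nic_ips/get_ip_address steps traced above recover each RDMA interface's IPv4 address by parsing `ip -o -4 addr show`. A small sketch of that extraction, reusing the exact pipeline from the trace (the interface names are the ones reported in this run; the surrounding loop is simplified and illustrative):

    # extract the IPv4 address of an RDMA netdev, as get_ip_address does above
    get_ip_address() {
        local interface=$1
        # "ip -o" prints one line per address; field 4 is the CIDR form, e.g. 192.168.100.8/24
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    for nic in mlx_0_0 mlx_0_1; do   # names found under the two mlx5 PCI functions earlier in the log
        echo "$nic -> $(get_ip_address "$nic")"
    done
    # on this host the two calls resolve to 192.168.100.8 and 192.168.100.9

Those two addresses are what the script later joins into RDMA_IP_LIST and splits back out with head/tail into the first and second target IPs.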
00:14:57.692 16:26:05 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:57.692 16:26:05 -- nvmf/common.sh@105 -- # continue 2 00:14:57.692 16:26:05 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:57.692 16:26:05 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:57.692 16:26:05 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:57.692 16:26:05 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:57.692 16:26:05 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:57.692 16:26:05 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:57.692 16:26:05 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:57.692 16:26:05 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:57.692 16:26:05 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:57.692 16:26:05 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:57.692 16:26:05 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:57.692 16:26:05 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:57.692 16:26:05 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:14:57.692 192.168.100.9' 00:14:57.692 16:26:05 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:14:57.692 192.168.100.9' 00:14:57.692 16:26:05 -- nvmf/common.sh@446 -- # head -n 1 00:14:57.692 16:26:05 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:57.692 16:26:05 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:14:57.692 192.168.100.9' 00:14:57.692 16:26:05 -- nvmf/common.sh@447 -- # tail -n +2 00:14:57.692 16:26:05 -- nvmf/common.sh@447 -- # head -n 1 00:14:57.692 16:26:05 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:57.692 16:26:05 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:14:57.692 16:26:05 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:57.692 16:26:05 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:14:57.692 16:26:05 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:14:57.692 16:26:05 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:14:57.692 16:26:06 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:14:57.692 16:26:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:57.692 16:26:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:57.692 16:26:06 -- common/autotest_common.sh@10 -- # set +x 00:14:57.692 16:26:06 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:57.692 16:26:06 -- nvmf/common.sh@470 -- # nvmfpid=462875 00:14:57.692 16:26:06 -- nvmf/common.sh@471 -- # waitforlisten 462875 00:14:57.692 16:26:06 -- common/autotest_common.sh@817 -- # '[' -z 462875 ']' 00:14:57.692 16:26:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.692 16:26:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:57.692 16:26:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:57.692 16:26:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:57.692 16:26:06 -- common/autotest_common.sh@10 -- # set +x 00:14:57.692 [2024-04-26 16:26:06.063400] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
00:14:57.692 [2024-04-26 16:26:06.063452] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.692 EAL: No free 2048 kB hugepages reported on node 1 00:14:57.692 [2024-04-26 16:26:06.136134] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:57.692 [2024-04-26 16:26:06.217580] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:57.692 [2024-04-26 16:26:06.217624] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:57.692 [2024-04-26 16:26:06.217635] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:57.693 [2024-04-26 16:26:06.217644] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:57.693 [2024-04-26 16:26:06.217651] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:57.693 [2024-04-26 16:26:06.217699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.693 [2024-04-26 16:26:06.217789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:57.693 [2024-04-26 16:26:06.217872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:57.693 [2024-04-26 16:26:06.217874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.951 16:26:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:57.951 16:26:06 -- common/autotest_common.sh@850 -- # return 0 00:14:57.951 16:26:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:57.951 16:26:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:57.951 16:26:06 -- common/autotest_common.sh@10 -- # set +x 00:14:57.951 16:26:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:57.951 16:26:06 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:58.211 [2024-04-26 16:26:07.114716] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1650310/0x1654800) succeed. 00:14:58.211 [2024-04-26 16:26:07.125261] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1651950/0x1695e90) succeed. 
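With the transport created and both IB devices registered, target/fio.sh goes on (below) to build the bdevs and the NVMe-oF subsystem entirely through rpc.py. Condensed into one place, the sequence this run performs looks roughly like the following; the rpc path is shortened for readability, and the 64/512 arguments are the MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE values set at the top of the script:

    rpc=./scripts/rpc.py   # shortened; the run invokes the full workspace path
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512        # Malloc0, Malloc1: exported as plain namespaces
    $rpc bdev_malloc_create 64 512
    $rpc bdev_malloc_create 64 512        # Malloc2, Malloc3: members of raid0
    $rpc bdev_malloc_create 64 512
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    $rpc bdev_malloc_create 64 512        # Malloc4, Malloc5, Malloc6: members of concat0
    $rpc bdev_malloc_create 64 512
    $rpc bdev_malloc_create 64 512
    $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The host side then connects with `nvme connect -i 15 ... -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420`, and the waitforserial loop further down simply greps `lsblk -l -o NAME,SERIAL` until four namespaces carrying the SPDKISFASTANDAWESOME serial appear, which is why the fio jobs can target /dev/nvme0n1 through /dev/nvme0n4.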
00:14:58.470 16:26:07 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:58.470 16:26:07 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:14:58.470 16:26:07 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:58.730 16:26:07 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:14:58.730 16:26:07 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:58.988 16:26:07 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:14:58.988 16:26:07 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:59.246 16:26:08 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:14:59.246 16:26:08 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:14:59.505 16:26:08 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:59.505 16:26:08 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:14:59.505 16:26:08 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:59.765 16:26:08 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:14:59.765 16:26:08 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:00.024 16:26:08 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:15:00.024 16:26:08 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:15:00.283 16:26:09 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:00.283 16:26:09 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:00.283 16:26:09 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:00.543 16:26:09 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:00.543 16:26:09 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:00.802 16:26:09 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:00.802 [2024-04-26 16:26:09.776405] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:00.802 16:26:09 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:15:01.061 16:26:09 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:15:01.327 16:26:10 -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:03.233 16:26:11 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:15:03.233 16:26:11 -- common/autotest_common.sh@1184 -- # local 
i=0 00:15:03.233 16:26:11 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:15:03.233 16:26:11 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:15:03.233 16:26:11 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:15:03.233 16:26:11 -- common/autotest_common.sh@1191 -- # sleep 2 00:15:05.137 16:26:13 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:15:05.137 16:26:13 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:15:05.137 16:26:13 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:15:05.137 16:26:13 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:15:05.137 16:26:13 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:15:05.137 16:26:13 -- common/autotest_common.sh@1194 -- # return 0 00:15:05.137 16:26:13 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:05.137 [global] 00:15:05.137 thread=1 00:15:05.137 invalidate=1 00:15:05.137 rw=write 00:15:05.137 time_based=1 00:15:05.137 runtime=1 00:15:05.137 ioengine=libaio 00:15:05.137 direct=1 00:15:05.137 bs=4096 00:15:05.137 iodepth=1 00:15:05.137 norandommap=0 00:15:05.137 numjobs=1 00:15:05.137 00:15:05.137 verify_dump=1 00:15:05.137 verify_backlog=512 00:15:05.137 verify_state_save=0 00:15:05.137 do_verify=1 00:15:05.137 verify=crc32c-intel 00:15:05.137 [job0] 00:15:05.137 filename=/dev/nvme0n1 00:15:05.137 [job1] 00:15:05.137 filename=/dev/nvme0n2 00:15:05.137 [job2] 00:15:05.137 filename=/dev/nvme0n3 00:15:05.137 [job3] 00:15:05.137 filename=/dev/nvme0n4 00:15:05.137 Could not set queue depth (nvme0n1) 00:15:05.137 Could not set queue depth (nvme0n2) 00:15:05.137 Could not set queue depth (nvme0n3) 00:15:05.137 Could not set queue depth (nvme0n4) 00:15:05.137 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:05.137 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:05.137 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:05.137 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:05.137 fio-3.35 00:15:05.137 Starting 4 threads 00:15:06.516 00:15:06.516 job0: (groupid=0, jobs=1): err= 0: pid=464097: Fri Apr 26 16:26:15 2024 00:15:06.516 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:15:06.516 slat (nsec): min=5510, max=44478, avg=8751.00, stdev=1288.43 00:15:06.516 clat (usec): min=67, max=255, avg=86.02, stdev=10.58 00:15:06.516 lat (usec): min=73, max=272, avg=94.77, stdev=10.65 00:15:06.516 clat percentiles (usec): 00:15:06.516 | 1.00th=[ 73], 5.00th=[ 76], 10.00th=[ 77], 20.00th=[ 80], 00:15:06.516 | 30.00th=[ 81], 40.00th=[ 83], 50.00th=[ 85], 60.00th=[ 87], 00:15:06.516 | 70.00th=[ 88], 80.00th=[ 91], 90.00th=[ 96], 95.00th=[ 102], 00:15:06.516 | 99.00th=[ 135], 99.50th=[ 141], 99.90th=[ 149], 99.95th=[ 176], 00:15:06.516 | 99.99th=[ 255] 00:15:06.516 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:15:06.516 slat (nsec): min=7280, max=48496, avg=11112.13, stdev=1410.96 00:15:06.516 clat (usec): min=63, max=190, avg=85.68, stdev=15.98 00:15:06.516 lat (usec): min=73, max=201, avg=96.79, stdev=16.34 00:15:06.516 clat percentiles (usec): 00:15:06.516 | 1.00th=[ 69], 5.00th=[ 72], 10.00th=[ 74], 20.00th=[ 76], 00:15:06.516 | 30.00th=[ 78], 
40.00th=[ 80], 50.00th=[ 81], 60.00th=[ 83], 00:15:06.516 | 70.00th=[ 86], 80.00th=[ 90], 90.00th=[ 113], 95.00th=[ 126], 00:15:06.516 | 99.00th=[ 141], 99.50th=[ 145], 99.90th=[ 157], 99.95th=[ 169], 00:15:06.516 | 99.99th=[ 190] 00:15:06.516 bw ( KiB/s): min=20480, max=20480, per=29.29%, avg=20480.00, stdev= 0.00, samples=1 00:15:06.516 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:15:06.516 lat (usec) : 100=90.63%, 250=9.36%, 500=0.01% 00:15:06.516 cpu : usr=4.90%, sys=11.60%, ctx=10230, majf=0, minf=1 00:15:06.516 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:06.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:06.516 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:06.516 issued rwts: total=5110,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:06.516 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:06.516 job1: (groupid=0, jobs=1): err= 0: pid=464099: Fri Apr 26 16:26:15 2024 00:15:06.516 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:15:06.516 slat (nsec): min=8402, max=31623, avg=9253.11, stdev=1043.75 00:15:06.516 clat (usec): min=70, max=321, avg=111.22, stdev=29.50 00:15:06.516 lat (usec): min=79, max=330, avg=120.48, stdev=29.50 00:15:06.516 clat percentiles (usec): 00:15:06.516 | 1.00th=[ 75], 5.00th=[ 78], 10.00th=[ 80], 20.00th=[ 82], 00:15:06.516 | 30.00th=[ 85], 40.00th=[ 89], 50.00th=[ 100], 60.00th=[ 130], 00:15:06.516 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 147], 95.00th=[ 153], 00:15:06.516 | 99.00th=[ 172], 99.50th=[ 188], 99.90th=[ 206], 99.95th=[ 239], 00:15:06.516 | 99.99th=[ 322] 00:15:06.517 write: IOPS=4182, BW=16.3MiB/s (17.1MB/s)(16.4MiB/1001msec); 0 zone resets 00:15:06.517 slat (nsec): min=10338, max=55229, avg=11565.50, stdev=1684.40 00:15:06.517 clat (usec): min=65, max=288, avg=105.50, stdev=30.79 00:15:06.517 lat (usec): min=77, max=299, avg=117.06, stdev=30.95 00:15:06.517 clat percentiles (usec): 00:15:06.517 | 1.00th=[ 70], 5.00th=[ 73], 10.00th=[ 75], 20.00th=[ 77], 00:15:06.517 | 30.00th=[ 80], 40.00th=[ 83], 50.00th=[ 89], 60.00th=[ 124], 00:15:06.517 | 70.00th=[ 133], 80.00th=[ 139], 90.00th=[ 145], 95.00th=[ 149], 00:15:06.517 | 99.00th=[ 180], 99.50th=[ 188], 99.90th=[ 219], 99.95th=[ 225], 00:15:06.517 | 99.99th=[ 289] 00:15:06.517 bw ( KiB/s): min=16384, max=16384, per=23.43%, avg=16384.00, stdev= 0.00, samples=1 00:15:06.517 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:15:06.517 lat (usec) : 100=52.65%, 250=47.31%, 500=0.04% 00:15:06.517 cpu : usr=4.30%, sys=9.50%, ctx=8284, majf=0, minf=1 00:15:06.517 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:06.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:06.517 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:06.517 issued rwts: total=4096,4187,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:06.517 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:06.517 job2: (groupid=0, jobs=1): err= 0: pid=464102: Fri Apr 26 16:26:15 2024 00:15:06.517 read: IOPS=3449, BW=13.5MiB/s (14.1MB/s)(13.5MiB/1001msec) 00:15:06.517 slat (nsec): min=8559, max=36582, avg=9430.17, stdev=1376.00 00:15:06.517 clat (usec): min=87, max=231, avg=131.23, stdev=17.53 00:15:06.517 lat (usec): min=96, max=240, avg=140.66, stdev=17.58 00:15:06.517 clat percentiles (usec): 00:15:06.517 | 1.00th=[ 95], 5.00th=[ 100], 10.00th=[ 104], 20.00th=[ 116], 00:15:06.517 | 
30.00th=[ 126], 40.00th=[ 130], 50.00th=[ 135], 60.00th=[ 137], 00:15:06.517 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 151], 95.00th=[ 155], 00:15:06.517 | 99.00th=[ 178], 99.50th=[ 188], 99.90th=[ 200], 99.95th=[ 202], 00:15:06.517 | 99.99th=[ 231] 00:15:06.517 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:15:06.517 slat (nsec): min=10771, max=48286, avg=11865.70, stdev=1728.34 00:15:06.517 clat (usec): min=78, max=262, avg=127.62, stdev=17.41 00:15:06.517 lat (usec): min=89, max=273, avg=139.48, stdev=17.50 00:15:06.517 clat percentiles (usec): 00:15:06.517 | 1.00th=[ 89], 5.00th=[ 95], 10.00th=[ 101], 20.00th=[ 113], 00:15:06.517 | 30.00th=[ 122], 40.00th=[ 128], 50.00th=[ 133], 60.00th=[ 135], 00:15:06.517 | 70.00th=[ 139], 80.00th=[ 141], 90.00th=[ 147], 95.00th=[ 151], 00:15:06.517 | 99.00th=[ 165], 99.50th=[ 174], 99.90th=[ 196], 99.95th=[ 215], 00:15:06.517 | 99.99th=[ 265] 00:15:06.517 bw ( KiB/s): min=15432, max=15432, per=22.07%, avg=15432.00, stdev= 0.00, samples=1 00:15:06.517 iops : min= 3858, max= 3858, avg=3858.00, stdev= 0.00, samples=1 00:15:06.517 lat (usec) : 100=7.39%, 250=92.60%, 500=0.01% 00:15:06.517 cpu : usr=4.50%, sys=7.50%, ctx=7037, majf=0, minf=1 00:15:06.517 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:06.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:06.517 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:06.517 issued rwts: total=3453,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:06.517 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:06.517 job3: (groupid=0, jobs=1): err= 0: pid=464105: Fri Apr 26 16:26:15 2024 00:15:06.517 read: IOPS=4505, BW=17.6MiB/s (18.5MB/s)(17.6MiB/1001msec) 00:15:06.517 slat (nsec): min=4097, max=31505, avg=9023.12, stdev=1581.32 00:15:06.517 clat (usec): min=72, max=219, avg=98.18, stdev=18.57 00:15:06.517 lat (usec): min=78, max=225, avg=107.20, stdev=18.03 00:15:06.517 clat percentiles (usec): 00:15:06.517 | 1.00th=[ 81], 5.00th=[ 84], 10.00th=[ 85], 20.00th=[ 88], 00:15:06.517 | 30.00th=[ 90], 40.00th=[ 91], 50.00th=[ 93], 60.00th=[ 95], 00:15:06.517 | 70.00th=[ 98], 80.00th=[ 102], 90.00th=[ 121], 95.00th=[ 147], 00:15:06.517 | 99.00th=[ 165], 99.50th=[ 186], 99.90th=[ 210], 99.95th=[ 215], 00:15:06.517 | 99.99th=[ 221] 00:15:06.517 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:15:06.517 slat (nsec): min=5078, max=53230, avg=11470.70, stdev=2029.67 00:15:06.517 clat (usec): min=72, max=203, avg=96.76, stdev=19.95 00:15:06.517 lat (usec): min=83, max=214, avg=108.23, stdev=19.58 00:15:06.517 clat percentiles (usec): 00:15:06.517 | 1.00th=[ 77], 5.00th=[ 80], 10.00th=[ 81], 20.00th=[ 84], 00:15:06.517 | 30.00th=[ 86], 40.00th=[ 87], 50.00th=[ 89], 60.00th=[ 92], 00:15:06.517 | 70.00th=[ 96], 80.00th=[ 110], 90.00th=[ 130], 95.00th=[ 141], 00:15:06.517 | 99.00th=[ 157], 99.50th=[ 182], 99.90th=[ 194], 99.95th=[ 196], 00:15:06.517 | 99.99th=[ 204] 00:15:06.517 bw ( KiB/s): min=20480, max=20480, per=29.29%, avg=20480.00, stdev= 0.00, samples=1 00:15:06.517 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:15:06.517 lat (usec) : 100=75.70%, 250=24.30% 00:15:06.517 cpu : usr=5.60%, sys=9.10%, ctx=9118, majf=0, minf=2 00:15:06.517 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:06.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:06.517 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:06.517 issued rwts: total=4510,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:06.517 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:06.517 00:15:06.517 Run status group 0 (all jobs): 00:15:06.517 READ: bw=67.0MiB/s (70.3MB/s), 13.5MiB/s-19.9MiB/s (14.1MB/s-20.9MB/s), io=67.1MiB (70.3MB), run=1001-1001msec 00:15:06.517 WRITE: bw=68.3MiB/s (71.6MB/s), 14.0MiB/s-20.0MiB/s (14.7MB/s-20.9MB/s), io=68.4MiB (71.7MB), run=1001-1001msec 00:15:06.517 00:15:06.517 Disk stats (read/write): 00:15:06.517 nvme0n1: ios=4146/4321, merge=0/0, ticks=332/347, in_queue=679, util=83.77% 00:15:06.517 nvme0n2: ios=3072/3384, merge=0/0, ticks=350/336, in_queue=686, util=84.73% 00:15:06.517 nvme0n3: ios=2775/3072, merge=0/0, ticks=348/363, in_queue=711, util=88.17% 00:15:06.517 nvme0n4: ios=3754/4096, merge=0/0, ticks=329/353, in_queue=682, util=89.41% 00:15:06.517 16:26:15 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:15:06.517 [global] 00:15:06.517 thread=1 00:15:06.517 invalidate=1 00:15:06.517 rw=randwrite 00:15:06.517 time_based=1 00:15:06.517 runtime=1 00:15:06.517 ioengine=libaio 00:15:06.517 direct=1 00:15:06.517 bs=4096 00:15:06.517 iodepth=1 00:15:06.517 norandommap=0 00:15:06.517 numjobs=1 00:15:06.517 00:15:06.517 verify_dump=1 00:15:06.517 verify_backlog=512 00:15:06.517 verify_state_save=0 00:15:06.517 do_verify=1 00:15:06.517 verify=crc32c-intel 00:15:06.517 [job0] 00:15:06.517 filename=/dev/nvme0n1 00:15:06.517 [job1] 00:15:06.517 filename=/dev/nvme0n2 00:15:06.517 [job2] 00:15:06.517 filename=/dev/nvme0n3 00:15:06.517 [job3] 00:15:06.517 filename=/dev/nvme0n4 00:15:06.517 Could not set queue depth (nvme0n1) 00:15:06.517 Could not set queue depth (nvme0n2) 00:15:06.517 Could not set queue depth (nvme0n3) 00:15:06.517 Could not set queue depth (nvme0n4) 00:15:06.776 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:06.776 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:06.776 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:06.776 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:06.776 fio-3.35 00:15:06.776 Starting 4 threads 00:15:08.158 00:15:08.158 job0: (groupid=0, jobs=1): err= 0: pid=464421: Fri Apr 26 16:26:16 2024 00:15:08.158 read: IOPS=3469, BW=13.6MiB/s (14.2MB/s)(13.6MiB/1001msec) 00:15:08.158 slat (nsec): min=8412, max=36004, avg=10097.74, stdev=1713.68 00:15:08.158 clat (usec): min=77, max=391, avg=132.70, stdev=22.26 00:15:08.158 lat (usec): min=86, max=400, avg=142.80, stdev=22.78 00:15:08.158 clat percentiles (usec): 00:15:08.158 | 1.00th=[ 91], 5.00th=[ 106], 10.00th=[ 112], 20.00th=[ 117], 00:15:08.158 | 30.00th=[ 121], 40.00th=[ 124], 50.00th=[ 129], 60.00th=[ 137], 00:15:08.158 | 70.00th=[ 143], 80.00th=[ 149], 90.00th=[ 157], 95.00th=[ 167], 00:15:08.158 | 99.00th=[ 200], 99.50th=[ 227], 99.90th=[ 322], 99.95th=[ 367], 00:15:08.158 | 99.99th=[ 392] 00:15:08.158 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:15:08.158 slat (nsec): min=10294, max=38734, avg=12347.21, stdev=2018.14 00:15:08.158 clat (usec): min=73, max=449, avg=123.88, stdev=22.84 00:15:08.158 lat (usec): min=84, max=460, avg=136.23, stdev=23.50 00:15:08.158 clat percentiles (usec): 
00:15:08.158 | 1.00th=[ 83], 5.00th=[ 94], 10.00th=[ 102], 20.00th=[ 109], 00:15:08.158 | 30.00th=[ 114], 40.00th=[ 117], 50.00th=[ 121], 60.00th=[ 126], 00:15:08.158 | 70.00th=[ 133], 80.00th=[ 139], 90.00th=[ 147], 95.00th=[ 161], 00:15:08.158 | 99.00th=[ 192], 99.50th=[ 223], 99.90th=[ 330], 99.95th=[ 416], 00:15:08.158 | 99.99th=[ 449] 00:15:08.158 bw ( KiB/s): min=14680, max=14680, per=23.15%, avg=14680.00, stdev= 0.00, samples=1 00:15:08.158 iops : min= 3670, max= 3670, avg=3670.00, stdev= 0.00, samples=1 00:15:08.158 lat (usec) : 100=5.71%, 250=93.98%, 500=0.31% 00:15:08.158 cpu : usr=5.00%, sys=8.00%, ctx=7057, majf=0, minf=1 00:15:08.158 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:08.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:08.158 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:08.158 issued rwts: total=3473,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:08.158 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:08.158 job1: (groupid=0, jobs=1): err= 0: pid=464439: Fri Apr 26 16:26:16 2024 00:15:08.158 read: IOPS=3544, BW=13.8MiB/s (14.5MB/s)(13.9MiB/1001msec) 00:15:08.158 slat (nsec): min=8474, max=34522, avg=10150.09, stdev=1734.82 00:15:08.158 clat (usec): min=71, max=443, avg=130.30, stdev=25.02 00:15:08.158 lat (usec): min=80, max=452, avg=140.45, stdev=25.52 00:15:08.158 clat percentiles (usec): 00:15:08.158 | 1.00th=[ 78], 5.00th=[ 89], 10.00th=[ 106], 20.00th=[ 115], 00:15:08.158 | 30.00th=[ 119], 40.00th=[ 123], 50.00th=[ 128], 60.00th=[ 137], 00:15:08.158 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 155], 95.00th=[ 165], 00:15:08.158 | 99.00th=[ 206], 99.50th=[ 233], 99.90th=[ 343], 99.95th=[ 355], 00:15:08.158 | 99.99th=[ 445] 00:15:08.158 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:15:08.158 slat (nsec): min=10449, max=54079, avg=12491.73, stdev=2161.62 00:15:08.158 clat (usec): min=66, max=383, avg=123.01, stdev=26.20 00:15:08.158 lat (usec): min=77, max=394, avg=135.50, stdev=26.77 00:15:08.158 clat percentiles (usec): 00:15:08.158 | 1.00th=[ 75], 5.00th=[ 89], 10.00th=[ 97], 20.00th=[ 106], 00:15:08.158 | 30.00th=[ 111], 40.00th=[ 115], 50.00th=[ 120], 60.00th=[ 126], 00:15:08.158 | 70.00th=[ 133], 80.00th=[ 139], 90.00th=[ 151], 95.00th=[ 165], 00:15:08.158 | 99.00th=[ 212], 99.50th=[ 237], 99.90th=[ 363], 99.95th=[ 371], 00:15:08.158 | 99.99th=[ 383] 00:15:08.158 bw ( KiB/s): min=14578, max=14578, per=22.99%, avg=14578.00, stdev= 0.00, samples=1 00:15:08.158 iops : min= 3644, max= 3644, avg=3644.00, stdev= 0.00, samples=1 00:15:08.158 lat (usec) : 100=10.05%, 250=89.58%, 500=0.36% 00:15:08.158 cpu : usr=4.30%, sys=8.70%, ctx=7132, majf=0, minf=1 00:15:08.158 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:08.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:08.158 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:08.158 issued rwts: total=3548,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:08.158 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:08.158 job2: (groupid=0, jobs=1): err= 0: pid=464461: Fri Apr 26 16:26:16 2024 00:15:08.158 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:15:08.158 slat (nsec): min=8609, max=30798, avg=9468.15, stdev=1106.61 00:15:08.158 clat (usec): min=76, max=371, avg=122.78, stdev=25.17 00:15:08.158 lat (usec): min=85, max=380, avg=132.25, stdev=25.35 00:15:08.158 
clat percentiles (usec): 00:15:08.158 | 1.00th=[ 84], 5.00th=[ 88], 10.00th=[ 90], 20.00th=[ 95], 00:15:08.158 | 30.00th=[ 111], 40.00th=[ 118], 50.00th=[ 123], 60.00th=[ 130], 00:15:08.158 | 70.00th=[ 139], 80.00th=[ 145], 90.00th=[ 151], 95.00th=[ 157], 00:15:08.158 | 99.00th=[ 176], 99.50th=[ 217], 99.90th=[ 285], 99.95th=[ 326], 00:15:08.158 | 99.99th=[ 371] 00:15:08.158 write: IOPS=3895, BW=15.2MiB/s (16.0MB/s)(15.2MiB/1002msec); 0 zone resets 00:15:08.158 slat (nsec): min=10522, max=46126, avg=11532.97, stdev=1563.76 00:15:08.158 clat (usec): min=71, max=451, avg=119.39, stdev=27.63 00:15:08.158 lat (usec): min=83, max=462, avg=130.92, stdev=27.80 00:15:08.158 clat percentiles (usec): 00:15:08.158 | 1.00th=[ 79], 5.00th=[ 83], 10.00th=[ 85], 20.00th=[ 90], 00:15:08.158 | 30.00th=[ 104], 40.00th=[ 116], 50.00th=[ 121], 60.00th=[ 127], 00:15:08.158 | 70.00th=[ 135], 80.00th=[ 141], 90.00th=[ 149], 95.00th=[ 159], 00:15:08.158 | 99.00th=[ 190], 99.50th=[ 227], 99.90th=[ 326], 99.95th=[ 400], 00:15:08.158 | 99.99th=[ 453] 00:15:08.158 bw ( KiB/s): min=16384, max=16384, per=25.84%, avg=16384.00, stdev= 0.00, samples=1 00:15:08.158 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:15:08.158 lat (usec) : 100=27.27%, 250=72.47%, 500=0.25% 00:15:08.158 cpu : usr=3.60%, sys=8.99%, ctx=7488, majf=0, minf=1 00:15:08.158 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:08.158 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:08.158 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:08.158 issued rwts: total=3584,3903,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:08.158 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:08.158 job3: (groupid=0, jobs=1): err= 0: pid=464469: Fri Apr 26 16:26:16 2024 00:15:08.158 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:15:08.158 slat (nsec): min=8526, max=23262, avg=9435.58, stdev=961.67 00:15:08.158 clat (usec): min=66, max=270, avg=96.40, stdev=15.56 00:15:08.158 lat (usec): min=76, max=279, avg=105.84, stdev=15.73 00:15:08.158 clat percentiles (usec): 00:15:08.158 | 1.00th=[ 79], 5.00th=[ 82], 10.00th=[ 84], 20.00th=[ 86], 00:15:08.158 | 30.00th=[ 88], 40.00th=[ 89], 50.00th=[ 91], 60.00th=[ 93], 00:15:08.158 | 70.00th=[ 97], 80.00th=[ 113], 90.00th=[ 121], 95.00th=[ 125], 00:15:08.158 | 99.00th=[ 135], 99.50th=[ 139], 99.90th=[ 217], 99.95th=[ 235], 00:15:08.158 | 99.99th=[ 273] 00:15:08.158 write: IOPS=4809, BW=18.8MiB/s (19.7MB/s)(18.8MiB/1001msec); 0 zone resets 00:15:08.158 slat (nsec): min=6384, max=36976, avg=11401.07, stdev=1306.65 00:15:08.158 clat (usec): min=62, max=341, avg=91.24, stdev=14.84 00:15:08.158 lat (usec): min=74, max=353, avg=102.64, stdev=15.01 00:15:08.158 clat percentiles (usec): 00:15:08.158 | 1.00th=[ 75], 5.00th=[ 78], 10.00th=[ 80], 20.00th=[ 82], 00:15:08.158 | 30.00th=[ 84], 40.00th=[ 85], 50.00th=[ 87], 60.00th=[ 89], 00:15:08.158 | 70.00th=[ 92], 80.00th=[ 102], 90.00th=[ 115], 95.00th=[ 120], 00:15:08.158 | 99.00th=[ 130], 99.50th=[ 139], 99.90th=[ 208], 99.95th=[ 237], 00:15:08.158 | 99.99th=[ 343] 00:15:08.158 bw ( KiB/s): min=20480, max=20480, per=32.30%, avg=20480.00, stdev= 0.00, samples=1 00:15:08.158 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:15:08.158 lat (usec) : 100=76.81%, 250=23.16%, 500=0.03% 00:15:08.158 cpu : usr=4.00%, sys=11.50%, ctx=9422, majf=0, minf=2 00:15:08.158 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:08.158 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:08.158 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:08.158 issued rwts: total=4608,4814,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:08.158 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:08.158 00:15:08.158 Run status group 0 (all jobs): 00:15:08.158 READ: bw=59.3MiB/s (62.2MB/s), 13.6MiB/s-18.0MiB/s (14.2MB/s-18.9MB/s), io=59.4MiB (62.3MB), run=1001-1002msec 00:15:08.158 WRITE: bw=61.9MiB/s (64.9MB/s), 14.0MiB/s-18.8MiB/s (14.7MB/s-19.7MB/s), io=62.1MiB (65.1MB), run=1001-1002msec 00:15:08.159 00:15:08.159 Disk stats (read/write): 00:15:08.159 nvme0n1: ios=2753/3072, merge=0/0, ticks=381/375, in_queue=756, util=84.37% 00:15:08.159 nvme0n2: ios=2781/3072, merge=0/0, ticks=362/361, in_queue=723, util=85.00% 00:15:08.159 nvme0n3: ios=3072/3134, merge=0/0, ticks=366/357, in_queue=723, util=88.24% 00:15:08.159 nvme0n4: ios=4049/4096, merge=0/0, ticks=358/330, in_queue=688, util=89.38% 00:15:08.159 16:26:16 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:15:08.159 [global] 00:15:08.159 thread=1 00:15:08.159 invalidate=1 00:15:08.159 rw=write 00:15:08.159 time_based=1 00:15:08.159 runtime=1 00:15:08.159 ioengine=libaio 00:15:08.159 direct=1 00:15:08.159 bs=4096 00:15:08.159 iodepth=128 00:15:08.159 norandommap=0 00:15:08.159 numjobs=1 00:15:08.159 00:15:08.159 verify_dump=1 00:15:08.159 verify_backlog=512 00:15:08.159 verify_state_save=0 00:15:08.159 do_verify=1 00:15:08.159 verify=crc32c-intel 00:15:08.159 [job0] 00:15:08.159 filename=/dev/nvme0n1 00:15:08.159 [job1] 00:15:08.159 filename=/dev/nvme0n2 00:15:08.159 [job2] 00:15:08.159 filename=/dev/nvme0n3 00:15:08.159 [job3] 00:15:08.159 filename=/dev/nvme0n4 00:15:08.159 Could not set queue depth (nvme0n1) 00:15:08.159 Could not set queue depth (nvme0n2) 00:15:08.159 Could not set queue depth (nvme0n3) 00:15:08.159 Could not set queue depth (nvme0n4) 00:15:08.417 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:08.417 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:08.417 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:08.417 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:08.417 fio-3.35 00:15:08.417 Starting 4 threads 00:15:09.793 00:15:09.793 job0: (groupid=0, jobs=1): err= 0: pid=464836: Fri Apr 26 16:26:18 2024 00:15:09.793 read: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec) 00:15:09.793 slat (nsec): min=1991, max=11139k, avg=73737.85, stdev=373621.07 00:15:09.793 clat (usec): min=1086, max=25705, avg=9831.23, stdev=3866.44 00:15:09.793 lat (usec): min=1109, max=25708, avg=9904.97, stdev=3879.78 00:15:09.793 clat percentiles (usec): 00:15:09.793 | 1.00th=[ 3261], 5.00th=[ 4621], 10.00th=[ 5080], 20.00th=[ 6128], 00:15:09.793 | 30.00th=[ 7111], 40.00th=[ 8455], 50.00th=[ 9503], 60.00th=[10683], 00:15:09.793 | 70.00th=[11994], 80.00th=[13435], 90.00th=[14746], 95.00th=[16319], 00:15:09.793 | 99.00th=[19268], 99.50th=[22152], 99.90th=[25560], 99.95th=[25822], 00:15:09.793 | 99.99th=[25822] 00:15:09.793 write: IOPS=6680, BW=26.1MiB/s (27.4MB/s)(26.1MiB/1002msec); 0 zone resets 00:15:09.793 slat (usec): min=2, max=5528, avg=70.42, stdev=341.45 00:15:09.793 clat (usec): min=220, 
max=22785, avg=9188.96, stdev=3610.60 00:15:09.793 lat (usec): min=2484, max=22816, avg=9259.38, stdev=3628.75 00:15:09.793 clat percentiles (usec): 00:15:09.793 | 1.00th=[ 3752], 5.00th=[ 4686], 10.00th=[ 5211], 20.00th=[ 5866], 00:15:09.793 | 30.00th=[ 6390], 40.00th=[ 7504], 50.00th=[ 8586], 60.00th=[ 9896], 00:15:09.793 | 70.00th=[11207], 80.00th=[12387], 90.00th=[14353], 95.00th=[15795], 00:15:09.793 | 99.00th=[18744], 99.50th=[18744], 99.90th=[22152], 99.95th=[22676], 00:15:09.793 | 99.99th=[22676] 00:15:09.793 bw ( KiB/s): min=24576, max=28672, per=26.81%, avg=26624.00, stdev=2896.31, samples=2 00:15:09.793 iops : min= 6144, max= 7168, avg=6656.00, stdev=724.08, samples=2 00:15:09.793 lat (usec) : 250=0.01% 00:15:09.793 lat (msec) : 2=0.11%, 4=1.97%, 10=55.58%, 20=41.96%, 50=0.37% 00:15:09.793 cpu : usr=4.40%, sys=7.89%, ctx=1231, majf=0, minf=1 00:15:09.793 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:15:09.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:09.793 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:09.793 issued rwts: total=6656,6694,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:09.793 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:09.793 job1: (groupid=0, jobs=1): err= 0: pid=464851: Fri Apr 26 16:26:18 2024 00:15:09.793 read: IOPS=6380, BW=24.9MiB/s (26.1MB/s)(25.1MiB/1008msec) 00:15:09.793 slat (usec): min=2, max=5940, avg=73.84, stdev=364.61 00:15:09.793 clat (usec): min=2552, max=25847, avg=9977.75, stdev=3746.49 00:15:09.793 lat (usec): min=2561, max=25880, avg=10051.59, stdev=3762.03 00:15:09.793 clat percentiles (usec): 00:15:09.793 | 1.00th=[ 4293], 5.00th=[ 5080], 10.00th=[ 5538], 20.00th=[ 6259], 00:15:09.793 | 30.00th=[ 7373], 40.00th=[ 8586], 50.00th=[ 9503], 60.00th=[10683], 00:15:09.793 | 70.00th=[11863], 80.00th=[13173], 90.00th=[14877], 95.00th=[16581], 00:15:09.793 | 99.00th=[20841], 99.50th=[22152], 99.90th=[23462], 99.95th=[23462], 00:15:09.793 | 99.99th=[25822] 00:15:09.793 write: IOPS=6603, BW=25.8MiB/s (27.0MB/s)(26.0MiB/1008msec); 0 zone resets 00:15:09.793 slat (usec): min=2, max=5748, avg=73.38, stdev=379.64 00:15:09.793 clat (usec): min=3069, max=25895, avg=9543.80, stdev=4170.98 00:15:09.793 lat (usec): min=3073, max=25912, avg=9617.18, stdev=4191.78 00:15:09.793 clat percentiles (usec): 00:15:09.793 | 1.00th=[ 3752], 5.00th=[ 4424], 10.00th=[ 4948], 20.00th=[ 5866], 00:15:09.793 | 30.00th=[ 6587], 40.00th=[ 7767], 50.00th=[ 8717], 60.00th=[10028], 00:15:09.793 | 70.00th=[11076], 80.00th=[12518], 90.00th=[15926], 95.00th=[18220], 00:15:09.793 | 99.00th=[21627], 99.50th=[22414], 99.90th=[22938], 99.95th=[22938], 00:15:09.793 | 99.99th=[25822] 00:15:09.793 bw ( KiB/s): min=26448, max=26800, per=26.81%, avg=26624.00, stdev=248.90, samples=2 00:15:09.794 iops : min= 6612, max= 6700, avg=6656.00, stdev=62.23, samples=2 00:15:09.794 lat (msec) : 4=1.18%, 10=56.30%, 20=40.46%, 50=2.06% 00:15:09.794 cpu : usr=5.26%, sys=6.95%, ctx=1291, majf=0, minf=1 00:15:09.794 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:15:09.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:09.794 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:09.794 issued rwts: total=6432,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:09.794 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:09.794 job2: (groupid=0, jobs=1): err= 0: pid=464869: Fri Apr 26 16:26:18 2024 
00:15:09.794 read: IOPS=5079, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1008msec) 00:15:09.794 slat (usec): min=2, max=7441, avg=90.57, stdev=394.80 00:15:09.794 clat (usec): min=3345, max=24549, avg=12150.50, stdev=4359.83 00:15:09.794 lat (usec): min=3425, max=28469, avg=12241.07, stdev=4380.59 00:15:09.794 clat percentiles (usec): 00:15:09.794 | 1.00th=[ 4621], 5.00th=[ 5800], 10.00th=[ 6783], 20.00th=[ 7963], 00:15:09.794 | 30.00th=[ 8848], 40.00th=[10159], 50.00th=[12125], 60.00th=[13435], 00:15:09.794 | 70.00th=[14484], 80.00th=[16319], 90.00th=[18220], 95.00th=[19530], 00:15:09.794 | 99.00th=[22414], 99.50th=[22938], 99.90th=[24511], 99.95th=[24511], 00:15:09.794 | 99.99th=[24511] 00:15:09.794 write: IOPS=5553, BW=21.7MiB/s (22.7MB/s)(21.9MiB/1008msec); 0 zone resets 00:15:09.794 slat (usec): min=2, max=10915, avg=90.35, stdev=404.94 00:15:09.794 clat (usec): min=2850, max=27522, avg=11661.46, stdev=4263.62 00:15:09.794 lat (usec): min=2926, max=28097, avg=11751.82, stdev=4288.49 00:15:09.794 clat percentiles (usec): 00:15:09.794 | 1.00th=[ 4359], 5.00th=[ 5211], 10.00th=[ 5604], 20.00th=[ 7635], 00:15:09.794 | 30.00th=[ 8848], 40.00th=[10683], 50.00th=[11863], 60.00th=[13042], 00:15:09.794 | 70.00th=[14091], 80.00th=[15664], 90.00th=[17433], 95.00th=[18220], 00:15:09.794 | 99.00th=[19268], 99.50th=[26608], 99.90th=[27395], 99.95th=[27395], 00:15:09.794 | 99.99th=[27395] 00:15:09.794 bw ( KiB/s): min=20480, max=23280, per=22.03%, avg=21880.00, stdev=1979.90, samples=2 00:15:09.794 iops : min= 5120, max= 5820, avg=5470.00, stdev=494.97, samples=2 00:15:09.794 lat (msec) : 4=0.15%, 10=37.49%, 20=59.83%, 50=2.53% 00:15:09.794 cpu : usr=3.57%, sys=6.75%, ctx=1145, majf=0, minf=1 00:15:09.794 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:15:09.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:09.794 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:09.794 issued rwts: total=5120,5598,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:09.794 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:09.794 job3: (groupid=0, jobs=1): err= 0: pid=464871: Fri Apr 26 16:26:18 2024 00:15:09.794 read: IOPS=5587, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1008msec) 00:15:09.794 slat (usec): min=2, max=5261, avg=80.96, stdev=397.92 00:15:09.794 clat (usec): min=3522, max=21727, avg=10674.72, stdev=3171.84 00:15:09.794 lat (usec): min=3770, max=21735, avg=10755.67, stdev=3183.28 00:15:09.794 clat percentiles (usec): 00:15:09.794 | 1.00th=[ 5342], 5.00th=[ 6259], 10.00th=[ 6915], 20.00th=[ 7701], 00:15:09.794 | 30.00th=[ 8586], 40.00th=[ 9503], 50.00th=[10552], 60.00th=[11207], 00:15:09.794 | 70.00th=[12125], 80.00th=[13173], 90.00th=[15139], 95.00th=[16581], 00:15:09.794 | 99.00th=[19006], 99.50th=[20579], 99.90th=[21627], 99.95th=[21627], 00:15:09.794 | 99.99th=[21627] 00:15:09.794 write: IOPS=6026, BW=23.5MiB/s (24.7MB/s)(23.7MiB/1008msec); 0 zone resets 00:15:09.794 slat (usec): min=2, max=5960, avg=84.00, stdev=375.20 00:15:09.794 clat (usec): min=3454, max=19361, avg=11087.75, stdev=3462.67 00:15:09.794 lat (usec): min=3457, max=19857, avg=11171.75, stdev=3477.35 00:15:09.794 clat percentiles (usec): 00:15:09.794 | 1.00th=[ 4555], 5.00th=[ 5866], 10.00th=[ 6783], 20.00th=[ 7832], 00:15:09.794 | 30.00th=[ 8586], 40.00th=[ 9896], 50.00th=[11076], 60.00th=[11994], 00:15:09.794 | 70.00th=[12911], 80.00th=[14222], 90.00th=[15926], 95.00th=[17171], 00:15:09.794 | 99.00th=[18744], 99.50th=[19006], 99.90th=[19268], 99.95th=[19268], 
00:15:09.794 | 99.99th=[19268] 00:15:09.794 bw ( KiB/s): min=23000, max=24576, per=23.96%, avg=23788.00, stdev=1114.40, samples=2 00:15:09.794 iops : min= 5750, max= 6144, avg=5947.00, stdev=278.60, samples=2 00:15:09.794 lat (msec) : 4=0.26%, 10=42.78%, 20=56.62%, 50=0.34% 00:15:09.794 cpu : usr=3.97%, sys=6.75%, ctx=1150, majf=0, minf=1 00:15:09.794 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:15:09.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:09.794 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:09.794 issued rwts: total=5632,6075,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:09.794 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:09.794 00:15:09.794 Run status group 0 (all jobs): 00:15:09.794 READ: bw=92.4MiB/s (96.9MB/s), 19.8MiB/s-25.9MiB/s (20.8MB/s-27.2MB/s), io=93.1MiB (97.6MB), run=1002-1008msec 00:15:09.794 WRITE: bw=97.0MiB/s (102MB/s), 21.7MiB/s-26.1MiB/s (22.7MB/s-27.4MB/s), io=97.7MiB (102MB), run=1002-1008msec 00:15:09.794 00:15:09.794 Disk stats (read/write): 00:15:09.794 nvme0n1: ios=5260/5632, merge=0/0, ticks=15241/16070, in_queue=31311, util=84.87% 00:15:09.794 nvme0n2: ios=5545/5632, merge=0/0, ticks=16206/15917, in_queue=32123, util=85.05% 00:15:09.794 nvme0n3: ios=4127/4608, merge=0/0, ticks=14645/15719, in_queue=30364, util=88.38% 00:15:09.794 nvme0n4: ios=4899/5120, merge=0/0, ticks=14960/16103, in_queue=31063, util=88.37% 00:15:09.794 16:26:18 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:15:09.794 [global] 00:15:09.794 thread=1 00:15:09.794 invalidate=1 00:15:09.794 rw=randwrite 00:15:09.794 time_based=1 00:15:09.794 runtime=1 00:15:09.794 ioengine=libaio 00:15:09.794 direct=1 00:15:09.794 bs=4096 00:15:09.794 iodepth=128 00:15:09.794 norandommap=0 00:15:09.794 numjobs=1 00:15:09.794 00:15:09.794 verify_dump=1 00:15:09.794 verify_backlog=512 00:15:09.794 verify_state_save=0 00:15:09.794 do_verify=1 00:15:09.794 verify=crc32c-intel 00:15:09.794 [job0] 00:15:09.794 filename=/dev/nvme0n1 00:15:09.794 [job1] 00:15:09.794 filename=/dev/nvme0n2 00:15:09.794 [job2] 00:15:09.794 filename=/dev/nvme0n3 00:15:09.794 [job3] 00:15:09.794 filename=/dev/nvme0n4 00:15:09.794 Could not set queue depth (nvme0n1) 00:15:09.794 Could not set queue depth (nvme0n2) 00:15:09.794 Could not set queue depth (nvme0n3) 00:15:09.794 Could not set queue depth (nvme0n4) 00:15:09.794 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:09.794 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:09.794 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:09.794 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:09.794 fio-3.35 00:15:09.794 Starting 4 threads 00:15:11.201 00:15:11.201 job0: (groupid=0, jobs=1): err= 0: pid=465170: Fri Apr 26 16:26:19 2024 00:15:11.201 read: IOPS=7668, BW=30.0MiB/s (31.4MB/s)(30.0MiB/1003msec) 00:15:11.201 slat (usec): min=2, max=5438, avg=57.19, stdev=274.45 00:15:11.201 clat (usec): min=2027, max=18006, avg=7557.29, stdev=2617.07 00:15:11.201 lat (usec): min=2572, max=18009, avg=7614.48, stdev=2629.36 00:15:11.201 clat percentiles (usec): 00:15:11.201 | 1.00th=[ 3851], 5.00th=[ 4621], 10.00th=[ 4948], 20.00th=[ 
5473], 00:15:11.201 | 30.00th=[ 5932], 40.00th=[ 6259], 50.00th=[ 6587], 60.00th=[ 7111], 00:15:11.202 | 70.00th=[ 8291], 80.00th=[10028], 90.00th=[11600], 95.00th=[12649], 00:15:11.202 | 99.00th=[14746], 99.50th=[15139], 99.90th=[16909], 99.95th=[16909], 00:15:11.202 | 99.99th=[17957] 00:15:11.202 write: IOPS=8167, BW=31.9MiB/s (33.5MB/s)(32.0MiB/1003msec); 0 zone resets 00:15:11.202 slat (usec): min=2, max=5975, avg=63.14, stdev=282.16 00:15:11.202 clat (usec): min=2430, max=23199, avg=8391.96, stdev=3899.57 00:15:11.202 lat (usec): min=2505, max=23203, avg=8455.10, stdev=3919.24 00:15:11.202 clat percentiles (usec): 00:15:11.202 | 1.00th=[ 3752], 5.00th=[ 4555], 10.00th=[ 4883], 20.00th=[ 5211], 00:15:11.202 | 30.00th=[ 5932], 40.00th=[ 6259], 50.00th=[ 6783], 60.00th=[ 7898], 00:15:11.202 | 70.00th=[ 9241], 80.00th=[11338], 90.00th=[14877], 95.00th=[16712], 00:15:11.202 | 99.00th=[19268], 99.50th=[19792], 99.90th=[23200], 99.95th=[23200], 00:15:11.202 | 99.99th=[23200] 00:15:11.202 bw ( KiB/s): min=31488, max=33128, per=33.60%, avg=32308.00, stdev=1159.66, samples=2 00:15:11.202 iops : min= 7872, max= 8282, avg=8077.00, stdev=289.91, samples=2 00:15:11.202 lat (msec) : 4=1.68%, 10=75.28%, 20=22.82%, 50=0.23% 00:15:11.202 cpu : usr=5.79%, sys=8.38%, ctx=1654, majf=0, minf=1 00:15:11.202 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:15:11.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:11.202 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:11.202 issued rwts: total=7692,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:11.202 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:11.202 job1: (groupid=0, jobs=1): err= 0: pid=465171: Fri Apr 26 16:26:19 2024 00:15:11.202 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:15:11.202 slat (usec): min=2, max=4852, avg=81.89, stdev=380.31 00:15:11.202 clat (usec): min=3506, max=21442, avg=10703.11, stdev=3489.36 00:15:11.202 lat (usec): min=3513, max=21446, avg=10785.00, stdev=3501.55 00:15:11.202 clat percentiles (usec): 00:15:11.202 | 1.00th=[ 4359], 5.00th=[ 5276], 10.00th=[ 6259], 20.00th=[ 7504], 00:15:11.202 | 30.00th=[ 8848], 40.00th=[ 9634], 50.00th=[10421], 60.00th=[11338], 00:15:11.202 | 70.00th=[12387], 80.00th=[13566], 90.00th=[15270], 95.00th=[16909], 00:15:11.202 | 99.00th=[20317], 99.50th=[20841], 99.90th=[21365], 99.95th=[21365], 00:15:11.202 | 99.99th=[21365] 00:15:11.202 write: IOPS=5683, BW=22.2MiB/s (23.3MB/s)(22.3MiB/1003msec); 0 zone resets 00:15:11.202 slat (usec): min=2, max=5515, avg=89.42, stdev=373.12 00:15:11.202 clat (usec): min=1422, max=22402, avg=11657.19, stdev=3944.45 00:15:11.202 lat (usec): min=2861, max=23064, avg=11746.61, stdev=3960.73 00:15:11.202 clat percentiles (usec): 00:15:11.202 | 1.00th=[ 3818], 5.00th=[ 5080], 10.00th=[ 6390], 20.00th=[ 8356], 00:15:11.202 | 30.00th=[ 9372], 40.00th=[10552], 50.00th=[11469], 60.00th=[12387], 00:15:11.202 | 70.00th=[13566], 80.00th=[14877], 90.00th=[16909], 95.00th=[19268], 00:15:11.202 | 99.00th=[20841], 99.50th=[22152], 99.90th=[22414], 99.95th=[22414], 00:15:11.202 | 99.99th=[22414] 00:15:11.202 bw ( KiB/s): min=20480, max=24576, per=23.43%, avg=22528.00, stdev=2896.31, samples=2 00:15:11.202 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:15:11.202 lat (msec) : 2=0.01%, 4=0.77%, 10=38.34%, 20=59.38%, 50=1.51% 00:15:11.202 cpu : usr=3.09%, sys=6.89%, ctx=1387, majf=0, minf=1 00:15:11.202 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 
8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:15:11.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:11.202 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:11.202 issued rwts: total=5632,5701,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:11.202 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:11.202 job2: (groupid=0, jobs=1): err= 0: pid=465172: Fri Apr 26 16:26:19 2024 00:15:11.202 read: IOPS=4364, BW=17.0MiB/s (17.9MB/s)(17.1MiB/1004msec) 00:15:11.202 slat (usec): min=2, max=5591, avg=100.53, stdev=451.97 00:15:11.202 clat (usec): min=2823, max=23070, avg=13447.67, stdev=4105.51 00:15:11.202 lat (usec): min=2830, max=23084, avg=13548.20, stdev=4123.81 00:15:11.202 clat percentiles (usec): 00:15:11.202 | 1.00th=[ 5145], 5.00th=[ 6456], 10.00th=[ 7898], 20.00th=[ 9634], 00:15:11.202 | 30.00th=[11338], 40.00th=[12387], 50.00th=[13566], 60.00th=[14484], 00:15:11.202 | 70.00th=[15401], 80.00th=[17171], 90.00th=[19268], 95.00th=[20317], 00:15:11.202 | 99.00th=[22152], 99.50th=[22152], 99.90th=[22152], 99.95th=[22152], 00:15:11.202 | 99.99th=[23200] 00:15:11.202 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:15:11.202 slat (usec): min=2, max=7232, avg=115.83, stdev=500.12 00:15:11.202 clat (usec): min=4821, max=25432, avg=14774.17, stdev=3196.44 00:15:11.202 lat (usec): min=4830, max=25447, avg=14890.01, stdev=3196.69 00:15:11.202 clat percentiles (usec): 00:15:11.202 | 1.00th=[ 7111], 5.00th=[10028], 10.00th=[10945], 20.00th=[12518], 00:15:11.202 | 30.00th=[13304], 40.00th=[13960], 50.00th=[14353], 60.00th=[15139], 00:15:11.202 | 70.00th=[16057], 80.00th=[17171], 90.00th=[19006], 95.00th=[20841], 00:15:11.202 | 99.00th=[23725], 99.50th=[23725], 99.90th=[25297], 99.95th=[25560], 00:15:11.202 | 99.99th=[25560] 00:15:11.202 bw ( KiB/s): min=17720, max=19144, per=19.17%, avg=18432.00, stdev=1006.92, samples=2 00:15:11.202 iops : min= 4430, max= 4786, avg=4608.00, stdev=251.73, samples=2 00:15:11.202 lat (msec) : 4=0.13%, 10=13.15%, 20=80.01%, 50=6.71% 00:15:11.202 cpu : usr=3.69%, sys=4.79%, ctx=994, majf=0, minf=1 00:15:11.202 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:15:11.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:11.202 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:11.202 issued rwts: total=4382,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:11.202 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:11.202 job3: (groupid=0, jobs=1): err= 0: pid=465173: Fri Apr 26 16:26:19 2024 00:15:11.202 read: IOPS=5425, BW=21.2MiB/s (22.2MB/s)(21.3MiB/1004msec) 00:15:11.202 slat (usec): min=2, max=6021, avg=90.52, stdev=405.38 00:15:11.202 clat (usec): min=2125, max=23096, avg=11571.77, stdev=4061.63 00:15:11.202 lat (usec): min=3278, max=23752, avg=11662.29, stdev=4080.98 00:15:11.202 clat percentiles (usec): 00:15:11.202 | 1.00th=[ 4555], 5.00th=[ 5997], 10.00th=[ 7177], 20.00th=[ 8291], 00:15:11.202 | 30.00th=[ 8848], 40.00th=[ 9634], 50.00th=[10945], 60.00th=[12125], 00:15:11.202 | 70.00th=[13435], 80.00th=[14746], 90.00th=[17171], 95.00th=[19792], 00:15:11.202 | 99.00th=[21890], 99.50th=[22676], 99.90th=[22938], 99.95th=[23200], 00:15:11.202 | 99.99th=[23200] 00:15:11.202 write: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec); 0 zone resets 00:15:11.202 slat (usec): min=2, max=4852, avg=84.71, stdev=366.54 00:15:11.202 clat (usec): min=2787, max=22169, avg=11338.01, stdev=4082.29 
00:15:11.202 lat (usec): min=3454, max=22174, avg=11422.72, stdev=4104.82 00:15:11.202 clat percentiles (usec): 00:15:11.202 | 1.00th=[ 4555], 5.00th=[ 5473], 10.00th=[ 6194], 20.00th=[ 7898], 00:15:11.202 | 30.00th=[ 8848], 40.00th=[ 9896], 50.00th=[10814], 60.00th=[11994], 00:15:11.202 | 70.00th=[12911], 80.00th=[14353], 90.00th=[17171], 95.00th=[20055], 00:15:11.202 | 99.00th=[21365], 99.50th=[21890], 99.90th=[22152], 99.95th=[22152], 00:15:11.202 | 99.99th=[22152] 00:15:11.202 bw ( KiB/s): min=20480, max=24576, per=23.43%, avg=22528.00, stdev=2896.31, samples=2 00:15:11.202 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:15:11.202 lat (msec) : 4=0.42%, 10=42.19%, 20=52.59%, 50=4.80% 00:15:11.202 cpu : usr=2.99%, sys=6.98%, ctx=1226, majf=0, minf=1 00:15:11.202 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:15:11.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:11.202 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:11.202 issued rwts: total=5447,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:11.202 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:11.202 00:15:11.202 Run status group 0 (all jobs): 00:15:11.202 READ: bw=90.1MiB/s (94.5MB/s), 17.0MiB/s-30.0MiB/s (17.9MB/s-31.4MB/s), io=90.4MiB (94.8MB), run=1003-1004msec 00:15:11.202 WRITE: bw=93.9MiB/s (98.5MB/s), 17.9MiB/s-31.9MiB/s (18.8MB/s-33.5MB/s), io=94.3MiB (98.8MB), run=1003-1004msec 00:15:11.202 00:15:11.202 Disk stats (read/write): 00:15:11.202 nvme0n1: ios=6964/7168, merge=0/0, ticks=13795/15198, in_queue=28993, util=86.17% 00:15:11.202 nvme0n2: ios=4564/4608, merge=0/0, ticks=14784/15832, in_queue=30616, util=85.96% 00:15:11.202 nvme0n3: ios=3584/3811, merge=0/0, ticks=14049/15188, in_queue=29237, util=88.73% 00:15:11.202 nvme0n4: ios=4329/4608, merge=0/0, ticks=14563/15342, in_queue=29905, util=89.26% 00:15:11.202 16:26:20 -- target/fio.sh@55 -- # sync 00:15:11.203 16:26:20 -- target/fio.sh@59 -- # fio_pid=465361 00:15:11.203 16:26:20 -- target/fio.sh@61 -- # sleep 3 00:15:11.203 16:26:20 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:15:11.203 [global] 00:15:11.203 thread=1 00:15:11.203 invalidate=1 00:15:11.203 rw=read 00:15:11.203 time_based=1 00:15:11.203 runtime=10 00:15:11.203 ioengine=libaio 00:15:11.203 direct=1 00:15:11.203 bs=4096 00:15:11.203 iodepth=1 00:15:11.203 norandommap=1 00:15:11.203 numjobs=1 00:15:11.203 00:15:11.203 [job0] 00:15:11.203 filename=/dev/nvme0n1 00:15:11.203 [job1] 00:15:11.203 filename=/dev/nvme0n2 00:15:11.203 [job2] 00:15:11.203 filename=/dev/nvme0n3 00:15:11.203 [job3] 00:15:11.203 filename=/dev/nvme0n4 00:15:11.203 Could not set queue depth (nvme0n1) 00:15:11.203 Could not set queue depth (nvme0n2) 00:15:11.203 Could not set queue depth (nvme0n3) 00:15:11.203 Could not set queue depth (nvme0n4) 00:15:11.460 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:11.460 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:11.461 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:11.461 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:11.461 fio-3.35 00:15:11.461 Starting 4 threads 00:15:14.036 16:26:23 -- target/fio.sh@63 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:15:14.296 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=89722880, buflen=4096 00:15:14.296 fio: pid=465476, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:14.296 16:26:23 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:15:14.554 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=83824640, buflen=4096 00:15:14.554 fio: pid=465475, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:14.554 16:26:23 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:14.554 16:26:23 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:15:14.554 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=15605760, buflen=4096 00:15:14.554 fio: pid=465473, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:14.813 16:26:23 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:14.813 16:26:23 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:15:14.813 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=25530368, buflen=4096 00:15:14.813 fio: pid=465474, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:14.813 16:26:23 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:14.813 16:26:23 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:15:14.813 00:15:14.813 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=465473: Fri Apr 26 16:26:23 2024 00:15:14.813 read: IOPS=6565, BW=25.6MiB/s (26.9MB/s)(78.9MiB/3076msec) 00:15:14.813 slat (usec): min=5, max=13992, avg=11.51, stdev=168.50 00:15:14.813 clat (usec): min=51, max=320, avg=138.51, stdev=33.33 00:15:14.813 lat (usec): min=60, max=14063, avg=150.02, stdev=171.31 00:15:14.813 clat percentiles (usec): 00:15:14.813 | 1.00th=[ 62], 5.00th=[ 78], 10.00th=[ 85], 20.00th=[ 124], 00:15:14.813 | 30.00th=[ 128], 40.00th=[ 131], 50.00th=[ 135], 60.00th=[ 139], 00:15:14.813 | 70.00th=[ 163], 80.00th=[ 172], 90.00th=[ 180], 95.00th=[ 186], 00:15:14.813 | 99.00th=[ 223], 99.50th=[ 231], 99.90th=[ 243], 99.95th=[ 245], 00:15:14.813 | 99.99th=[ 262] 00:15:14.813 bw ( KiB/s): min=23144, max=27288, per=24.01%, avg=25116.80, stdev=1604.91, samples=5 00:15:14.813 iops : min= 5786, max= 6822, avg=6279.20, stdev=401.23, samples=5 00:15:14.813 lat (usec) : 100=13.66%, 250=86.31%, 500=0.02% 00:15:14.813 cpu : usr=2.15%, sys=7.38%, ctx=20202, majf=0, minf=1 00:15:14.813 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:14.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:14.813 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:14.813 issued rwts: total=20195,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:14.813 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:14.813 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=465474: Fri Apr 26 16:26:23 2024 00:15:14.813 read: IOPS=6944, BW=27.1MiB/s (28.4MB/s)(88.3MiB/3257msec) 00:15:14.813 slat (usec): min=5, 
max=25947, avg=13.09, stdev=268.52 00:15:14.813 clat (usec): min=47, max=266, avg=129.45, stdev=39.30 00:15:14.813 lat (usec): min=53, max=26041, avg=142.54, stdev=270.95 00:15:14.813 clat percentiles (usec): 00:15:14.813 | 1.00th=[ 55], 5.00th=[ 59], 10.00th=[ 64], 20.00th=[ 85], 00:15:14.813 | 30.00th=[ 124], 40.00th=[ 129], 50.00th=[ 133], 60.00th=[ 137], 00:15:14.813 | 70.00th=[ 147], 80.00th=[ 169], 90.00th=[ 178], 95.00th=[ 184], 00:15:14.813 | 99.00th=[ 212], 99.50th=[ 225], 99.90th=[ 235], 99.95th=[ 241], 00:15:14.813 | 99.99th=[ 253] 00:15:14.813 bw ( KiB/s): min=23672, max=31720, per=25.22%, avg=26386.67, stdev=2924.17, samples=6 00:15:14.813 iops : min= 5918, max= 7930, avg=6596.67, stdev=731.04, samples=6 00:15:14.813 lat (usec) : 50=0.03%, 100=22.38%, 250=77.57%, 500=0.01% 00:15:14.813 cpu : usr=2.12%, sys=7.83%, ctx=22625, majf=0, minf=1 00:15:14.813 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:14.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:14.813 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:14.813 issued rwts: total=22618,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:14.813 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:14.813 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=465475: Fri Apr 26 16:26:23 2024 00:15:14.813 read: IOPS=7108, BW=27.8MiB/s (29.1MB/s)(79.9MiB/2879msec) 00:15:14.813 slat (usec): min=2, max=15918, avg= 9.97, stdev=138.61 00:15:14.813 clat (usec): min=66, max=472, avg=129.26, stdev=33.44 00:15:14.813 lat (usec): min=71, max=16001, avg=139.23, stdev=142.28 00:15:14.813 clat percentiles (usec): 00:15:14.813 | 1.00th=[ 75], 5.00th=[ 80], 10.00th=[ 83], 20.00th=[ 90], 00:15:14.813 | 30.00th=[ 122], 40.00th=[ 126], 50.00th=[ 129], 60.00th=[ 133], 00:15:14.813 | 70.00th=[ 137], 80.00th=[ 165], 90.00th=[ 176], 95.00th=[ 182], 00:15:14.813 | 99.00th=[ 212], 99.50th=[ 221], 99.90th=[ 231], 99.95th=[ 235], 00:15:14.813 | 99.99th=[ 404] 00:15:14.813 bw ( KiB/s): min=24144, max=33944, per=26.61%, avg=27841.60, stdev=3863.99, samples=5 00:15:14.813 iops : min= 6036, max= 8486, avg=6960.40, stdev=966.00, samples=5 00:15:14.813 lat (usec) : 100=24.70%, 250=75.27%, 500=0.02% 00:15:14.813 cpu : usr=2.29%, sys=7.33%, ctx=20470, majf=0, minf=1 00:15:14.813 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:14.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:14.813 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:14.813 issued rwts: total=20466,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:14.813 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:14.813 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=465476: Fri Apr 26 16:26:23 2024 00:15:14.814 read: IOPS=8152, BW=31.8MiB/s (33.4MB/s)(85.6MiB/2687msec) 00:15:14.814 slat (nsec): min=8284, max=36602, avg=9009.26, stdev=1145.81 00:15:14.814 clat (usec): min=70, max=414, avg=112.14, stdev=27.93 00:15:14.814 lat (usec): min=78, max=424, avg=121.15, stdev=28.15 00:15:14.814 clat percentiles (usec): 00:15:14.814 | 1.00th=[ 77], 5.00th=[ 81], 10.00th=[ 82], 20.00th=[ 85], 00:15:14.814 | 30.00th=[ 88], 40.00th=[ 92], 50.00th=[ 121], 60.00th=[ 126], 00:15:14.814 | 70.00th=[ 129], 80.00th=[ 133], 90.00th=[ 137], 95.00th=[ 172], 00:15:14.814 | 99.00th=[ 186], 99.50th=[ 200], 99.90th=[ 225], 99.95th=[ 235], 
00:15:14.814 | 99.99th=[ 273] 00:15:14.814 bw ( KiB/s): min=26160, max=37752, per=31.43%, avg=32878.40, stdev=4919.89, samples=5 00:15:14.814 iops : min= 6540, max= 9438, avg=8219.60, stdev=1229.97, samples=5 00:15:14.814 lat (usec) : 100=45.33%, 250=54.65%, 500=0.01% 00:15:14.814 cpu : usr=2.68%, sys=8.97%, ctx=21907, majf=0, minf=2 00:15:14.814 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:14.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:14.814 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:14.814 issued rwts: total=21906,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:14.814 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:14.814 00:15:14.814 Run status group 0 (all jobs): 00:15:14.814 READ: bw=102MiB/s (107MB/s), 25.6MiB/s-31.8MiB/s (26.9MB/s-33.4MB/s), io=333MiB (349MB), run=2687-3257msec 00:15:14.814 00:15:14.814 Disk stats (read/write): 00:15:14.814 nvme0n1: ios=18293/0, merge=0/0, ticks=2557/0, in_queue=2557, util=94.62% 00:15:14.814 nvme0n2: ios=20583/0, merge=0/0, ticks=2698/0, in_queue=2698, util=93.38% 00:15:14.814 nvme0n3: ios=20267/0, merge=0/0, ticks=2509/0, in_queue=2509, util=95.61% 00:15:14.814 nvme0n4: ios=21370/0, merge=0/0, ticks=2262/0, in_queue=2262, util=96.44% 00:15:15.073 16:26:24 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:15.073 16:26:24 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:15:15.332 16:26:24 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:15.333 16:26:24 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:15:15.592 16:26:24 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:15.592 16:26:24 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:15:15.592 16:26:24 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:15.592 16:26:24 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:15:15.852 16:26:24 -- target/fio.sh@69 -- # fio_status=0 00:15:15.852 16:26:24 -- target/fio.sh@70 -- # wait 465361 00:15:15.852 16:26:24 -- target/fio.sh@70 -- # fio_status=4 00:15:15.852 16:26:24 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:19.140 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:19.140 16:26:27 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:19.140 16:26:27 -- common/autotest_common.sh@1205 -- # local i=0 00:15:19.140 16:26:27 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:15:19.140 16:26:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:19.140 16:26:27 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:15:19.140 16:26:27 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:19.140 16:26:28 -- common/autotest_common.sh@1217 -- # return 0 00:15:19.140 16:26:28 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:15:19.140 16:26:28 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:15:19.140 nvmf hotplug test: fio failed as expected 00:15:19.140 16:26:28 -- target/fio.sh@83 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:19.400 16:26:28 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:15:19.400 16:26:28 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:15:19.400 16:26:28 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:15:19.400 16:26:28 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:15:19.400 16:26:28 -- target/fio.sh@91 -- # nvmftestfini 00:15:19.400 16:26:28 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:19.400 16:26:28 -- nvmf/common.sh@117 -- # sync 00:15:19.400 16:26:28 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:15:19.400 16:26:28 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:15:19.400 16:26:28 -- nvmf/common.sh@120 -- # set +e 00:15:19.400 16:26:28 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:19.400 16:26:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:15:19.400 rmmod nvme_rdma 00:15:19.400 rmmod nvme_fabrics 00:15:19.400 16:26:28 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:19.400 16:26:28 -- nvmf/common.sh@124 -- # set -e 00:15:19.400 16:26:28 -- nvmf/common.sh@125 -- # return 0 00:15:19.400 16:26:28 -- nvmf/common.sh@478 -- # '[' -n 462875 ']' 00:15:19.400 16:26:28 -- nvmf/common.sh@479 -- # killprocess 462875 00:15:19.400 16:26:28 -- common/autotest_common.sh@936 -- # '[' -z 462875 ']' 00:15:19.400 16:26:28 -- common/autotest_common.sh@940 -- # kill -0 462875 00:15:19.400 16:26:28 -- common/autotest_common.sh@941 -- # uname 00:15:19.400 16:26:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:19.400 16:26:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 462875 00:15:19.400 16:26:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:19.400 16:26:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:19.400 16:26:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 462875' 00:15:19.400 killing process with pid 462875 00:15:19.400 16:26:28 -- common/autotest_common.sh@955 -- # kill 462875 00:15:19.400 16:26:28 -- common/autotest_common.sh@960 -- # wait 462875 00:15:19.400 [2024-04-26 16:26:28.413689] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:15:19.660 16:26:28 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:19.660 16:26:28 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:15:19.660 00:15:19.660 real 0m28.721s 00:15:19.660 user 1m48.438s 00:15:19.660 sys 0m10.132s 00:15:19.660 16:26:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:19.660 16:26:28 -- common/autotest_common.sh@10 -- # set +x 00:15:19.660 ************************************ 00:15:19.660 END TEST nvmf_fio_target 00:15:19.660 ************************************ 00:15:19.660 16:26:28 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:15:19.660 16:26:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:19.660 16:26:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:19.660 16:26:28 -- common/autotest_common.sh@10 -- # set +x 00:15:19.919 ************************************ 00:15:19.919 START TEST nvmf_bdevio 00:15:19.919 ************************************ 00:15:19.919 16:26:28 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:15:19.919 * Looking for test storage... 
00:15:20.179 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:20.179 16:26:28 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:20.179 16:26:28 -- nvmf/common.sh@7 -- # uname -s 00:15:20.179 16:26:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:20.179 16:26:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:20.179 16:26:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:20.179 16:26:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:20.179 16:26:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:20.179 16:26:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:20.179 16:26:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:20.179 16:26:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:20.179 16:26:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:20.179 16:26:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:20.179 16:26:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:15:20.179 16:26:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:15:20.179 16:26:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:20.179 16:26:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:20.179 16:26:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:20.179 16:26:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:20.179 16:26:28 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:20.179 16:26:28 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:20.179 16:26:28 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:20.179 16:26:28 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:20.179 16:26:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.179 16:26:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.179 16:26:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.179 16:26:28 -- paths/export.sh@5 -- # export PATH 00:15:20.179 16:26:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.179 16:26:28 -- nvmf/common.sh@47 -- # : 0 00:15:20.179 16:26:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:20.179 16:26:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:20.179 16:26:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:20.179 16:26:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:20.179 16:26:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:20.179 16:26:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:20.179 16:26:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:20.179 16:26:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:20.179 16:26:28 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:20.179 16:26:28 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:20.179 16:26:28 -- target/bdevio.sh@14 -- # nvmftestinit 00:15:20.179 16:26:28 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:15:20.179 16:26:28 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:20.179 16:26:28 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:20.179 16:26:28 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:20.179 16:26:28 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:20.179 16:26:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:20.179 16:26:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:20.179 16:26:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:20.179 16:26:28 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:20.179 16:26:28 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:20.179 16:26:28 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:20.179 16:26:28 -- common/autotest_common.sh@10 -- # set +x 00:15:26.772 16:26:34 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:26.772 16:26:34 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:26.772 16:26:34 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:26.772 16:26:34 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:26.772 16:26:34 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:26.772 16:26:34 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:26.772 16:26:34 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:26.772 16:26:34 -- nvmf/common.sh@295 -- # net_devs=() 00:15:26.772 16:26:34 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:26.772 16:26:34 -- nvmf/common.sh@296 
-- # e810=() 00:15:26.772 16:26:34 -- nvmf/common.sh@296 -- # local -ga e810 00:15:26.772 16:26:34 -- nvmf/common.sh@297 -- # x722=() 00:15:26.772 16:26:34 -- nvmf/common.sh@297 -- # local -ga x722 00:15:26.772 16:26:34 -- nvmf/common.sh@298 -- # mlx=() 00:15:26.772 16:26:34 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:26.772 16:26:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:26.772 16:26:34 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:26.772 16:26:34 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:26.772 16:26:34 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:26.772 16:26:34 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:26.772 16:26:34 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:26.772 16:26:34 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:26.772 16:26:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:26.772 16:26:34 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:26.772 16:26:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:26.772 16:26:34 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:26.772 16:26:34 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:26.772 16:26:34 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:26.772 16:26:34 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:26.772 16:26:34 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:26.772 16:26:34 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:15:26.772 16:26:34 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:15:26.772 16:26:34 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:26.772 16:26:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:26.772 16:26:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:15:26.772 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:15:26.772 16:26:34 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:26.772 16:26:34 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:26.772 16:26:34 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:15:26.772 16:26:34 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:15:26.772 16:26:34 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:26.772 16:26:34 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:26.772 16:26:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:26.772 16:26:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:15:26.772 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:15:26.772 16:26:34 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:26.772 16:26:34 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:26.772 16:26:34 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:15:26.772 16:26:34 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:15:26.772 16:26:34 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:26.772 16:26:34 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:26.772 16:26:34 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:26.772 16:26:34 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:15:26.772 16:26:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:26.772 16:26:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:26.772 16:26:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 
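The nvmf/common.sh scan traced above works by matching each PCI function's vendor/device ID against the known e810/x722/mlx tables and then resolving matches to kernel net interfaces through sysfs. A minimal standalone sketch of that lookup, limited to the Mellanox 0x15b3:0x1013 IDs reported for 0000:18:00.0 and 0000:18:00.1 in this run, is shown below; the loop structure, variable names and output wording are illustrative assumptions, not the script's exact code.

# Sketch only: map RDMA-capable NICs to their net devices via sysfs.
# Vendor/device IDs (0x15b3 / 0x1013) are the ones reported above; the
# echo format below mimics the trace but is an assumption for illustration.
for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor")
    device=$(<"$pci/device")
    [[ $vendor == 0x15b3 && $device == 0x1013 ]] || continue
    for netdev in "$pci"/net/*; do
        [[ -e $netdev ]] || continue          # no net interface bound yet
        echo "Found net device under ${pci##*/}: ${netdev##*/}"
    done
done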
00:15:26.772 16:26:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:26.772 16:26:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:15:26.772 Found net devices under 0000:18:00.0: mlx_0_0 00:15:26.772 16:26:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:26.772 16:26:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:26.772 16:26:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:26.772 16:26:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:26.772 16:26:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:26.772 16:26:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:15:26.772 Found net devices under 0000:18:00.1: mlx_0_1 00:15:26.772 16:26:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:26.772 16:26:34 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:26.772 16:26:34 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:26.772 16:26:34 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:26.772 16:26:34 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:15:26.772 16:26:34 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:15:26.772 16:26:34 -- nvmf/common.sh@409 -- # rdma_device_init 00:15:26.772 16:26:34 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:15:26.772 16:26:34 -- nvmf/common.sh@58 -- # uname 00:15:26.772 16:26:34 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:26.772 16:26:34 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:26.772 16:26:34 -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:26.772 16:26:34 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:26.772 16:26:34 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:26.772 16:26:34 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:26.772 16:26:34 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:26.772 16:26:34 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:26.772 16:26:34 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:15:26.772 16:26:34 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:26.772 16:26:34 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:26.772 16:26:34 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:26.772 16:26:34 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:26.772 16:26:34 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:26.772 16:26:34 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:26.772 16:26:34 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:26.772 16:26:34 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:26.772 16:26:34 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:26.772 16:26:34 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:26.772 16:26:34 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:26.772 16:26:34 -- nvmf/common.sh@105 -- # continue 2 00:15:26.772 16:26:34 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:26.772 16:26:34 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:26.772 16:26:34 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:26.772 16:26:34 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:26.772 16:26:34 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:26.772 16:26:34 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:26.772 16:26:34 -- nvmf/common.sh@105 -- # continue 2 00:15:26.772 16:26:34 -- nvmf/common.sh@73 -- # for nic_name in 
$(get_rdma_if_list) 00:15:26.772 16:26:34 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:15:26.772 16:26:34 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:26.772 16:26:34 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:26.772 16:26:34 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:26.773 16:26:34 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:26.773 16:26:34 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:26.773 16:26:34 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:26.773 16:26:34 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:15:26.773 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:26.773 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:15:26.773 altname enp24s0f0np0 00:15:26.773 altname ens785f0np0 00:15:26.773 inet 192.168.100.8/24 scope global mlx_0_0 00:15:26.773 valid_lft forever preferred_lft forever 00:15:26.773 16:26:34 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:26.773 16:26:34 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:15:26.773 16:26:34 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:26.773 16:26:34 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:26.773 16:26:34 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:26.773 16:26:34 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:26.773 16:26:34 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:26.773 16:26:34 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:26.773 16:26:34 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:15:26.773 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:26.773 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:15:26.773 altname enp24s0f1np1 00:15:26.773 altname ens785f1np1 00:15:26.773 inet 192.168.100.9/24 scope global mlx_0_1 00:15:26.773 valid_lft forever preferred_lft forever 00:15:26.773 16:26:34 -- nvmf/common.sh@411 -- # return 0 00:15:26.773 16:26:34 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:26.773 16:26:34 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:26.773 16:26:34 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:15:26.773 16:26:34 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:15:26.773 16:26:34 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:26.773 16:26:34 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:26.773 16:26:34 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:26.773 16:26:34 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:26.773 16:26:34 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:26.773 16:26:34 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:26.773 16:26:34 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:26.773 16:26:34 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:26.773 16:26:34 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:26.773 16:26:34 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:26.773 16:26:34 -- nvmf/common.sh@105 -- # continue 2 00:15:26.773 16:26:34 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:26.773 16:26:34 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:26.773 16:26:34 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:26.773 16:26:34 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:26.773 16:26:34 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:26.773 16:26:34 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:26.773 16:26:34 -- 
nvmf/common.sh@105 -- # continue 2 00:15:26.773 16:26:34 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:26.773 16:26:34 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:15:26.773 16:26:34 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:26.773 16:26:34 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:26.773 16:26:34 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:26.773 16:26:34 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:26.773 16:26:34 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:26.773 16:26:34 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:15:26.773 16:26:34 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:26.773 16:26:34 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:26.773 16:26:34 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:26.773 16:26:34 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:26.773 16:26:34 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:15:26.773 192.168.100.9' 00:15:26.773 16:26:34 -- nvmf/common.sh@446 -- # head -n 1 00:15:26.773 16:26:34 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:15:26.773 192.168.100.9' 00:15:26.773 16:26:34 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:26.773 16:26:34 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:15:26.773 192.168.100.9' 00:15:26.773 16:26:34 -- nvmf/common.sh@447 -- # head -n 1 00:15:26.773 16:26:34 -- nvmf/common.sh@447 -- # tail -n +2 00:15:26.773 16:26:34 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:26.773 16:26:34 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:15:26.773 16:26:34 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:26.773 16:26:34 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:15:26.773 16:26:34 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:15:26.773 16:26:34 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:15:26.773 16:26:34 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:26.773 16:26:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:26.773 16:26:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:26.773 16:26:34 -- common/autotest_common.sh@10 -- # set +x 00:15:26.773 16:26:34 -- nvmf/common.sh@470 -- # nvmfpid=469392 00:15:26.773 16:26:34 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:15:26.773 16:26:34 -- nvmf/common.sh@471 -- # waitforlisten 469392 00:15:26.773 16:26:34 -- common/autotest_common.sh@817 -- # '[' -z 469392 ']' 00:15:26.773 16:26:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:26.773 16:26:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:26.773 16:26:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:26.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:26.773 16:26:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:26.773 16:26:34 -- common/autotest_common.sh@10 -- # set +x 00:15:26.773 [2024-04-26 16:26:35.033999] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
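The allocate_nic_ips / get_ip_address steps traced above boil down to parsing "ip -o -4 addr show" for each RDMA interface and keeping only the address part; the awk/cut pipeline is visible verbatim in the trace. A hedged standalone equivalent, using the interface names from this host, is:

# Sketch: print the first IPv4 address of each RDMA netdev, mirroring the
# get_ip_address pipeline shown in the trace above.
for ifc in mlx_0_0 mlx_0_1; do
    ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
done
# On this host the trace reports 192.168.100.8 (mlx_0_0) and 192.168.100.9 (mlx_0_1).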
00:15:26.773 [2024-04-26 16:26:35.034057] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:26.773 EAL: No free 2048 kB hugepages reported on node 1 00:15:26.773 [2024-04-26 16:26:35.107024] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:26.773 [2024-04-26 16:26:35.192551] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:26.773 [2024-04-26 16:26:35.192594] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:26.773 [2024-04-26 16:26:35.192604] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:26.773 [2024-04-26 16:26:35.192613] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:26.773 [2024-04-26 16:26:35.192620] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:26.773 [2024-04-26 16:26:35.192746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:26.773 [2024-04-26 16:26:35.192787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:26.773 [2024-04-26 16:26:35.192885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:26.773 [2024-04-26 16:26:35.192887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:27.033 16:26:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:27.033 16:26:35 -- common/autotest_common.sh@850 -- # return 0 00:15:27.033 16:26:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:27.033 16:26:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:27.033 16:26:35 -- common/autotest_common.sh@10 -- # set +x 00:15:27.033 16:26:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:27.033 16:26:35 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:27.033 16:26:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:27.033 16:26:35 -- common/autotest_common.sh@10 -- # set +x 00:15:27.033 [2024-04-26 16:26:35.913812] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x7dabf0/0x7df0e0) succeed. 00:15:27.033 [2024-04-26 16:26:35.924231] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x7dc230/0x820770) succeed. 
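At this point the target application is up with an RDMA transport and both mlx5 IB devices registered. The rpc_cmd sequence used by bdevio.sh here (transport creation above, Malloc bdev / subsystem / namespace / listener just below) condenses to the following standalone rpc.py calls; only commands and arguments that appear in this log are used, and the long workspace path is kept in a variable purely for readability.

RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420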
00:15:27.033 16:26:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:27.033 16:26:36 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:27.033 16:26:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:27.033 16:26:36 -- common/autotest_common.sh@10 -- # set +x 00:15:27.292 Malloc0 00:15:27.292 16:26:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:27.292 16:26:36 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:27.292 16:26:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:27.292 16:26:36 -- common/autotest_common.sh@10 -- # set +x 00:15:27.292 16:26:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:27.292 16:26:36 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:27.292 16:26:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:27.292 16:26:36 -- common/autotest_common.sh@10 -- # set +x 00:15:27.292 16:26:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:27.292 16:26:36 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:27.292 16:26:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:27.292 16:26:36 -- common/autotest_common.sh@10 -- # set +x 00:15:27.292 [2024-04-26 16:26:36.082842] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:27.292 16:26:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:27.292 16:26:36 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:15:27.292 16:26:36 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:27.292 16:26:36 -- nvmf/common.sh@521 -- # config=() 00:15:27.292 16:26:36 -- nvmf/common.sh@521 -- # local subsystem config 00:15:27.292 16:26:36 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:27.292 16:26:36 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:27.292 { 00:15:27.292 "params": { 00:15:27.292 "name": "Nvme$subsystem", 00:15:27.292 "trtype": "$TEST_TRANSPORT", 00:15:27.292 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:27.292 "adrfam": "ipv4", 00:15:27.292 "trsvcid": "$NVMF_PORT", 00:15:27.292 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:27.292 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:27.292 "hdgst": ${hdgst:-false}, 00:15:27.292 "ddgst": ${ddgst:-false} 00:15:27.292 }, 00:15:27.292 "method": "bdev_nvme_attach_controller" 00:15:27.292 } 00:15:27.292 EOF 00:15:27.292 )") 00:15:27.292 16:26:36 -- nvmf/common.sh@543 -- # cat 00:15:27.292 16:26:36 -- nvmf/common.sh@545 -- # jq . 00:15:27.292 16:26:36 -- nvmf/common.sh@546 -- # IFS=, 00:15:27.292 16:26:36 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:27.292 "params": { 00:15:27.292 "name": "Nvme1", 00:15:27.292 "trtype": "rdma", 00:15:27.292 "traddr": "192.168.100.8", 00:15:27.292 "adrfam": "ipv4", 00:15:27.292 "trsvcid": "4420", 00:15:27.292 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:27.292 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:27.292 "hdgst": false, 00:15:27.292 "ddgst": false 00:15:27.292 }, 00:15:27.292 "method": "bdev_nvme_attach_controller" 00:15:27.292 }' 00:15:27.292 [2024-04-26 16:26:36.132774] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
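bdevio receives its configuration through --json /dev/fd/62, generated by gen_nvmf_target_json. Only the inner bdev_nvme_attach_controller entry is printed in the trace above; rewritten as an on-disk file with the usual SPDK "subsystems"/"bdev" envelope (the envelope is an assumption here, it is not shown in this log), it would look roughly like this:

# Sketch: equivalent standalone JSON config for bdevio. The params are taken
# verbatim from the printf above; the surrounding envelope and the /tmp path
# are assumptions for illustration.
cat > /tmp/bdevio_nvmf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "rdma",
            "traddr": "192.168.100.8",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# then: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio_nvmf.json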
00:15:27.292 [2024-04-26 16:26:36.132828] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid469568 ] 00:15:27.292 EAL: No free 2048 kB hugepages reported on node 1 00:15:27.292 [2024-04-26 16:26:36.204680] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:27.292 [2024-04-26 16:26:36.285410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:27.292 [2024-04-26 16:26:36.285427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:27.292 [2024-04-26 16:26:36.285429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.552 I/O targets: 00:15:27.552 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:27.552 00:15:27.552 00:15:27.552 CUnit - A unit testing framework for C - Version 2.1-3 00:15:27.552 http://cunit.sourceforge.net/ 00:15:27.552 00:15:27.552 00:15:27.552 Suite: bdevio tests on: Nvme1n1 00:15:27.552 Test: blockdev write read block ...passed 00:15:27.552 Test: blockdev write zeroes read block ...passed 00:15:27.552 Test: blockdev write zeroes read no split ...passed 00:15:27.552 Test: blockdev write zeroes read split ...passed 00:15:27.552 Test: blockdev write zeroes read split partial ...passed 00:15:27.552 Test: blockdev reset ...[2024-04-26 16:26:36.501610] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:27.552 [2024-04-26 16:26:36.524508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:15:27.552 [2024-04-26 16:26:36.551516] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:27.552 passed 00:15:27.552 Test: blockdev write read 8 blocks ...passed 00:15:27.552 Test: blockdev write read size > 128k ...passed 00:15:27.552 Test: blockdev write read invalid size ...passed 00:15:27.552 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:27.552 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:27.552 Test: blockdev write read max offset ...passed 00:15:27.552 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:27.552 Test: blockdev writev readv 8 blocks ...passed 00:15:27.552 Test: blockdev writev readv 30 x 1block ...passed 00:15:27.552 Test: blockdev writev readv block ...passed 00:15:27.552 Test: blockdev writev readv size > 128k ...passed 00:15:27.552 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:27.552 Test: blockdev comparev and writev ...[2024-04-26 16:26:36.554583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:27.552 [2024-04-26 16:26:36.554614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:27.552 [2024-04-26 16:26:36.554627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:27.552 [2024-04-26 16:26:36.554637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:27.552 [2024-04-26 16:26:36.554811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:27.552 [2024-04-26 16:26:36.554823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:27.552 [2024-04-26 16:26:36.554834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:27.552 [2024-04-26 16:26:36.554847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:27.552 [2024-04-26 16:26:36.555001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:27.552 [2024-04-26 16:26:36.555012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:27.552 [2024-04-26 16:26:36.555023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:27.552 [2024-04-26 16:26:36.555032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:27.552 [2024-04-26 16:26:36.555209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:27.552 [2024-04-26 16:26:36.555219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:27.552 [2024-04-26 16:26:36.555230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:27.552 [2024-04-26 16:26:36.555239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:27.552 passed 00:15:27.552 Test: blockdev nvme passthru rw ...passed 00:15:27.552 Test: blockdev nvme passthru vendor specific ...[2024-04-26 16:26:36.555507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:15:27.552 [2024-04-26 16:26:36.555520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:27.552 [2024-04-26 16:26:36.555568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:15:27.552 [2024-04-26 16:26:36.555578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:27.552 [2024-04-26 16:26:36.555630] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:15:27.552 [2024-04-26 16:26:36.555640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:27.552 [2024-04-26 16:26:36.555684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:15:27.552 [2024-04-26 16:26:36.555694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:27.552 passed 00:15:27.553 Test: blockdev nvme admin passthru ...passed 00:15:27.553 Test: blockdev copy ...passed 00:15:27.553 00:15:27.553 Run Summary: Type Total Ran Passed Failed Inactive 00:15:27.553 suites 1 1 n/a 0 0 00:15:27.553 tests 23 23 23 0 0 00:15:27.553 asserts 152 152 152 0 n/a 00:15:27.553 00:15:27.553 Elapsed time = 0.178 seconds 00:15:27.812 16:26:36 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:27.812 16:26:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:27.812 16:26:36 -- common/autotest_common.sh@10 -- # set +x 00:15:27.812 16:26:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:27.812 16:26:36 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:27.812 16:26:36 -- target/bdevio.sh@30 -- # nvmftestfini 00:15:27.812 16:26:36 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:27.812 16:26:36 -- nvmf/common.sh@117 -- # sync 00:15:27.812 16:26:36 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:15:27.812 16:26:36 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:15:27.812 16:26:36 -- nvmf/common.sh@120 -- # set +e 00:15:27.812 16:26:36 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:27.812 16:26:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:15:27.812 rmmod nvme_rdma 00:15:27.812 rmmod nvme_fabrics 00:15:28.071 16:26:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:28.071 16:26:36 -- nvmf/common.sh@124 -- # set -e 00:15:28.071 16:26:36 -- nvmf/common.sh@125 -- # return 0 00:15:28.071 16:26:36 -- nvmf/common.sh@478 -- # '[' -n 469392 ']' 00:15:28.071 16:26:36 -- nvmf/common.sh@479 -- # killprocess 469392 00:15:28.071 16:26:36 -- common/autotest_common.sh@936 -- # '[' -z 469392 ']' 00:15:28.071 16:26:36 -- common/autotest_common.sh@940 -- # kill -0 469392 00:15:28.071 16:26:36 -- common/autotest_common.sh@941 -- # uname 00:15:28.071 16:26:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:28.071 16:26:36 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 469392 00:15:28.071 16:26:36 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:15:28.071 16:26:36 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:15:28.071 16:26:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 469392' 00:15:28.071 killing process with pid 469392 00:15:28.071 16:26:36 -- common/autotest_common.sh@955 -- # kill 469392 00:15:28.071 16:26:36 -- common/autotest_common.sh@960 -- # wait 469392 00:15:28.071 [2024-04-26 16:26:37.003690] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:15:28.329 16:26:37 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:28.329 16:26:37 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:15:28.329 00:15:28.329 real 0m8.415s 00:15:28.329 user 0m10.851s 00:15:28.329 sys 0m5.288s 00:15:28.329 16:26:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:28.329 16:26:37 -- common/autotest_common.sh@10 -- # set +x 00:15:28.329 ************************************ 00:15:28.329 END TEST nvmf_bdevio 00:15:28.329 ************************************ 00:15:28.329 16:26:37 -- nvmf/nvmf.sh@58 -- # '[' rdma = tcp ']' 00:15:28.329 16:26:37 -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:15:28.329 16:26:37 -- nvmf/nvmf.sh@70 -- # [[ phy == phy ]] 00:15:28.329 16:26:37 -- nvmf/nvmf.sh@71 -- # '[' rdma = tcp ']' 00:15:28.329 16:26:37 -- nvmf/nvmf.sh@77 -- # [[ rdma == \r\d\m\a ]] 00:15:28.329 16:26:37 -- nvmf/nvmf.sh@78 -- # run_test nvmf_device_removal test/nvmf/target/device_removal.sh --transport=rdma 00:15:28.329 16:26:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:28.329 16:26:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:28.329 16:26:37 -- common/autotest_common.sh@10 -- # set +x 00:15:28.590 ************************************ 00:15:28.590 START TEST nvmf_device_removal 00:15:28.590 ************************************ 00:15:28.590 16:26:37 -- common/autotest_common.sh@1111 -- # test/nvmf/target/device_removal.sh --transport=rdma 00:15:28.590 * Looking for test storage... 
00:15:28.590 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:28.590 16:26:37 -- target/device_removal.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:15:28.590 16:26:37 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:15:28.590 16:26:37 -- common/autotest_common.sh@34 -- # set -e 00:15:28.590 16:26:37 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:15:28.590 16:26:37 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:15:28.590 16:26:37 -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:15:28.590 16:26:37 -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:15:28.590 16:26:37 -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:15:28.590 16:26:37 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:15:28.590 16:26:37 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:15:28.590 16:26:37 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:15:28.590 16:26:37 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:15:28.590 16:26:37 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:15:28.590 16:26:37 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:15:28.590 16:26:37 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:15:28.590 16:26:37 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:15:28.590 16:26:37 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:15:28.590 16:26:37 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:15:28.590 16:26:37 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:15:28.590 16:26:37 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:15:28.590 16:26:37 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:15:28.590 16:26:37 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:15:28.590 16:26:37 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:15:28.590 16:26:37 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:15:28.590 16:26:37 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:15:28.590 16:26:37 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:15:28.590 16:26:37 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:15:28.590 16:26:37 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:15:28.590 16:26:37 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:15:28.590 16:26:37 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:15:28.590 16:26:37 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:15:28.590 16:26:37 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:15:28.590 16:26:37 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:15:28.590 16:26:37 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:15:28.590 16:26:37 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:15:28.590 16:26:37 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:15:28.590 16:26:37 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:15:28.590 16:26:37 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:15:28.590 16:26:37 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:15:28.590 16:26:37 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:15:28.590 16:26:37 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:15:28.590 16:26:37 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 
00:15:28.590 16:26:37 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:15:28.590 16:26:37 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:15:28.590 16:26:37 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:15:28.590 16:26:37 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:15:28.590 16:26:37 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:15:28.590 16:26:37 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:15:28.590 16:26:37 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:15:28.590 16:26:37 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:15:28.591 16:26:37 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:15:28.591 16:26:37 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:15:28.591 16:26:37 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:15:28.591 16:26:37 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:15:28.591 16:26:37 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:15:28.591 16:26:37 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:15:28.591 16:26:37 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:15:28.591 16:26:37 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:15:28.591 16:26:37 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:15:28.591 16:26:37 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:15:28.591 16:26:37 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:15:28.591 16:26:37 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:15:28.591 16:26:37 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:15:28.591 16:26:37 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:15:28.591 16:26:37 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:15:28.591 16:26:37 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:15:28.591 16:26:37 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:15:28.591 16:26:37 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:15:28.591 16:26:37 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:15:28.591 16:26:37 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:15:28.591 16:26:37 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:15:28.591 16:26:37 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:15:28.591 16:26:37 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:15:28.591 16:26:37 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:15:28.591 16:26:37 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:15:28.591 16:26:37 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:15:28.591 16:26:37 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:15:28.591 16:26:37 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:15:28.591 16:26:37 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:15:28.591 16:26:37 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:15:28.591 16:26:37 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:15:28.591 16:26:37 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:15:28.591 16:26:37 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:15:28.591 16:26:37 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:15:28.591 16:26:37 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:15:28.591 16:26:37 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:15:28.591 16:26:37 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:15:28.591 16:26:37 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:15:28.591 16:26:37 -- common/build_config.sh@81 -- # 
CONFIG_CROSS_PREFIX= 00:15:28.591 16:26:37 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:15:28.591 16:26:37 -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:15:28.591 16:26:37 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:15:28.591 16:26:37 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:15:28.591 16:26:37 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:15:28.591 16:26:37 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:15:28.591 16:26:37 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:15:28.591 16:26:37 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:15:28.591 16:26:37 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:15:28.591 16:26:37 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:15:28.591 16:26:37 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:15:28.591 16:26:37 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:15:28.591 16:26:37 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:15:28.591 16:26:37 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:15:28.591 16:26:37 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:15:28.591 16:26:37 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:15:28.591 16:26:37 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:15:28.591 #define SPDK_CONFIG_H 00:15:28.591 #define SPDK_CONFIG_APPS 1 00:15:28.591 #define SPDK_CONFIG_ARCH native 00:15:28.591 #undef SPDK_CONFIG_ASAN 00:15:28.591 #undef SPDK_CONFIG_AVAHI 00:15:28.591 #undef SPDK_CONFIG_CET 00:15:28.591 #define SPDK_CONFIG_COVERAGE 1 00:15:28.591 #define SPDK_CONFIG_CROSS_PREFIX 00:15:28.591 #undef SPDK_CONFIG_CRYPTO 00:15:28.591 #undef SPDK_CONFIG_CRYPTO_MLX5 00:15:28.591 #undef SPDK_CONFIG_CUSTOMOCF 00:15:28.591 #undef SPDK_CONFIG_DAOS 00:15:28.591 #define SPDK_CONFIG_DAOS_DIR 00:15:28.591 #define SPDK_CONFIG_DEBUG 1 00:15:28.591 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:15:28.591 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:15:28.591 #define SPDK_CONFIG_DPDK_INC_DIR 00:15:28.591 #define SPDK_CONFIG_DPDK_LIB_DIR 00:15:28.591 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:15:28.591 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:15:28.591 #define SPDK_CONFIG_EXAMPLES 1 00:15:28.591 #undef SPDK_CONFIG_FC 00:15:28.591 #define SPDK_CONFIG_FC_PATH 00:15:28.591 #define SPDK_CONFIG_FIO_PLUGIN 1 00:15:28.591 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:15:28.591 #undef SPDK_CONFIG_FUSE 00:15:28.591 #undef SPDK_CONFIG_FUZZER 00:15:28.591 #define SPDK_CONFIG_FUZZER_LIB 00:15:28.591 #undef SPDK_CONFIG_GOLANG 00:15:28.591 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:15:28.591 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:15:28.591 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:15:28.591 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:15:28.591 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:15:28.591 #undef SPDK_CONFIG_HAVE_LIBBSD 00:15:28.591 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 
00:15:28.591 #define SPDK_CONFIG_IDXD 1 00:15:28.591 #undef SPDK_CONFIG_IDXD_KERNEL 00:15:28.591 #undef SPDK_CONFIG_IPSEC_MB 00:15:28.591 #define SPDK_CONFIG_IPSEC_MB_DIR 00:15:28.591 #define SPDK_CONFIG_ISAL 1 00:15:28.591 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:15:28.591 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:15:28.591 #define SPDK_CONFIG_LIBDIR 00:15:28.591 #undef SPDK_CONFIG_LTO 00:15:28.591 #define SPDK_CONFIG_MAX_LCORES 00:15:28.591 #define SPDK_CONFIG_NVME_CUSE 1 00:15:28.591 #undef SPDK_CONFIG_OCF 00:15:28.591 #define SPDK_CONFIG_OCF_PATH 00:15:28.591 #define SPDK_CONFIG_OPENSSL_PATH 00:15:28.591 #undef SPDK_CONFIG_PGO_CAPTURE 00:15:28.591 #define SPDK_CONFIG_PGO_DIR 00:15:28.591 #undef SPDK_CONFIG_PGO_USE 00:15:28.591 #define SPDK_CONFIG_PREFIX /usr/local 00:15:28.591 #undef SPDK_CONFIG_RAID5F 00:15:28.591 #undef SPDK_CONFIG_RBD 00:15:28.591 #define SPDK_CONFIG_RDMA 1 00:15:28.591 #define SPDK_CONFIG_RDMA_PROV verbs 00:15:28.591 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:15:28.591 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:15:28.591 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:15:28.591 #define SPDK_CONFIG_SHARED 1 00:15:28.591 #undef SPDK_CONFIG_SMA 00:15:28.591 #define SPDK_CONFIG_TESTS 1 00:15:28.591 #undef SPDK_CONFIG_TSAN 00:15:28.591 #define SPDK_CONFIG_UBLK 1 00:15:28.591 #define SPDK_CONFIG_UBSAN 1 00:15:28.591 #undef SPDK_CONFIG_UNIT_TESTS 00:15:28.591 #undef SPDK_CONFIG_URING 00:15:28.591 #define SPDK_CONFIG_URING_PATH 00:15:28.591 #undef SPDK_CONFIG_URING_ZNS 00:15:28.591 #undef SPDK_CONFIG_USDT 00:15:28.591 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:15:28.591 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:15:28.591 #undef SPDK_CONFIG_VFIO_USER 00:15:28.591 #define SPDK_CONFIG_VFIO_USER_DIR 00:15:28.591 #define SPDK_CONFIG_VHOST 1 00:15:28.591 #define SPDK_CONFIG_VIRTIO 1 00:15:28.591 #undef SPDK_CONFIG_VTUNE 00:15:28.591 #define SPDK_CONFIG_VTUNE_DIR 00:15:28.591 #define SPDK_CONFIG_WERROR 1 00:15:28.591 #define SPDK_CONFIG_WPDK_DIR 00:15:28.591 #undef SPDK_CONFIG_XNVME 00:15:28.591 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:15:28.591 16:26:37 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:15:28.591 16:26:37 -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:28.591 16:26:37 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:28.591 16:26:37 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:28.591 16:26:37 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:28.591 16:26:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.591 16:26:37 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.591 16:26:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.591 16:26:37 -- paths/export.sh@5 -- # export PATH 00:15:28.591 16:26:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.591 16:26:37 -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:15:28.591 16:26:37 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:15:28.591 16:26:37 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:15:28.591 16:26:37 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:15:28.592 16:26:37 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:15:28.592 16:26:37 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:15:28.592 16:26:37 -- pm/common@67 -- # TEST_TAG=N/A 00:15:28.592 16:26:37 -- pm/common@68 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:15:28.592 16:26:37 -- pm/common@70 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:15:28.592 16:26:37 -- pm/common@71 -- # uname -s 00:15:28.592 16:26:37 -- pm/common@71 -- # PM_OS=Linux 00:15:28.592 16:26:37 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:15:28.592 16:26:37 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:15:28.592 16:26:37 -- pm/common@76 -- # [[ Linux == Linux ]] 00:15:28.592 16:26:37 -- pm/common@76 -- # [[ ............................... != QEMU ]] 00:15:28.592 16:26:37 -- pm/common@76 -- # [[ ! 
-e /.dockerenv ]] 00:15:28.592 16:26:37 -- pm/common@79 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:15:28.592 16:26:37 -- pm/common@80 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:15:28.592 16:26:37 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:15:28.592 16:26:37 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:15:28.592 16:26:37 -- pm/common@85 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:15:28.592 16:26:37 -- common/autotest_common.sh@57 -- # : 0 00:15:28.592 16:26:37 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:15:28.592 16:26:37 -- common/autotest_common.sh@61 -- # : 0 00:15:28.592 16:26:37 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:15:28.592 16:26:37 -- common/autotest_common.sh@63 -- # : 0 00:15:28.592 16:26:37 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:15:28.592 16:26:37 -- common/autotest_common.sh@65 -- # : 1 00:15:28.592 16:26:37 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:15:28.592 16:26:37 -- common/autotest_common.sh@67 -- # : 0 00:15:28.592 16:26:37 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:15:28.592 16:26:37 -- common/autotest_common.sh@69 -- # : 00:15:28.592 16:26:37 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:15:28.592 16:26:37 -- common/autotest_common.sh@71 -- # : 0 00:15:28.592 16:26:37 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:15:28.592 16:26:37 -- common/autotest_common.sh@73 -- # : 0 00:15:28.592 16:26:37 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:15:28.592 16:26:37 -- common/autotest_common.sh@75 -- # : 0 00:15:28.592 16:26:37 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:15:28.592 16:26:37 -- common/autotest_common.sh@77 -- # : 0 00:15:28.592 16:26:37 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:15:28.592 16:26:37 -- common/autotest_common.sh@79 -- # : 0 00:15:28.592 16:26:37 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:15:28.592 16:26:37 -- common/autotest_common.sh@81 -- # : 0 00:15:28.592 16:26:37 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:15:28.592 16:26:37 -- common/autotest_common.sh@83 -- # : 0 00:15:28.592 16:26:37 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:15:28.592 16:26:37 -- common/autotest_common.sh@85 -- # : 1 00:15:28.592 16:26:37 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:15:28.592 16:26:37 -- common/autotest_common.sh@87 -- # : 0 00:15:28.592 16:26:37 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:15:28.592 16:26:37 -- common/autotest_common.sh@89 -- # : 0 00:15:28.592 16:26:37 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:15:28.592 16:26:37 -- common/autotest_common.sh@91 -- # : 1 00:15:28.592 16:26:37 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:15:28.592 16:26:37 -- common/autotest_common.sh@93 -- # : 0 00:15:28.592 16:26:37 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:15:28.592 16:26:37 -- common/autotest_common.sh@95 -- # : 0 00:15:28.592 16:26:37 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:15:28.592 16:26:37 -- common/autotest_common.sh@97 -- # : 0 00:15:28.592 16:26:37 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:15:28.592 16:26:37 -- common/autotest_common.sh@99 -- # : 0 00:15:28.592 16:26:37 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 
00:15:28.592 16:26:37 -- common/autotest_common.sh@101 -- # : rdma 00:15:28.592 16:26:37 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:15:28.592 16:26:37 -- common/autotest_common.sh@103 -- # : 0 00:15:28.592 16:26:37 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:15:28.592 16:26:37 -- common/autotest_common.sh@105 -- # : 0 00:15:28.592 16:26:37 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:15:28.592 16:26:37 -- common/autotest_common.sh@107 -- # : 0 00:15:28.592 16:26:37 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:15:28.592 16:26:37 -- common/autotest_common.sh@109 -- # : 0 00:15:28.592 16:26:37 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:15:28.592 16:26:37 -- common/autotest_common.sh@111 -- # : 0 00:15:28.592 16:26:37 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:15:28.592 16:26:37 -- common/autotest_common.sh@113 -- # : 0 00:15:28.592 16:26:37 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:15:28.592 16:26:37 -- common/autotest_common.sh@115 -- # : 0 00:15:28.592 16:26:37 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:15:28.592 16:26:37 -- common/autotest_common.sh@117 -- # : 0 00:15:28.592 16:26:37 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:15:28.592 16:26:37 -- common/autotest_common.sh@119 -- # : 0 00:15:28.592 16:26:37 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:15:28.592 16:26:37 -- common/autotest_common.sh@121 -- # : 1 00:15:28.592 16:26:37 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:15:28.592 16:26:37 -- common/autotest_common.sh@123 -- # : 00:15:28.592 16:26:37 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:15:28.592 16:26:37 -- common/autotest_common.sh@125 -- # : 0 00:15:28.592 16:26:37 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:15:28.592 16:26:37 -- common/autotest_common.sh@127 -- # : 0 00:15:28.592 16:26:37 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:15:28.592 16:26:37 -- common/autotest_common.sh@129 -- # : 0 00:15:28.592 16:26:37 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:15:28.592 16:26:37 -- common/autotest_common.sh@131 -- # : 0 00:15:28.592 16:26:37 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:15:28.592 16:26:37 -- common/autotest_common.sh@133 -- # : 0 00:15:28.592 16:26:37 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:15:28.592 16:26:37 -- common/autotest_common.sh@135 -- # : 0 00:15:28.592 16:26:37 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:15:28.592 16:26:37 -- common/autotest_common.sh@137 -- # : 00:15:28.592 16:26:37 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:15:28.592 16:26:37 -- common/autotest_common.sh@139 -- # : true 00:15:28.592 16:26:37 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:15:28.592 16:26:37 -- common/autotest_common.sh@141 -- # : 0 00:15:28.592 16:26:37 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:15:28.592 16:26:37 -- common/autotest_common.sh@143 -- # : 0 00:15:28.592 16:26:37 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:15:28.592 16:26:37 -- common/autotest_common.sh@145 -- # : 0 00:15:28.592 16:26:37 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:15:28.592 16:26:37 -- common/autotest_common.sh@147 -- # : 0 00:15:28.592 16:26:37 -- common/autotest_common.sh@148 -- # export 
SPDK_TEST_USE_IGB_UIO 00:15:28.592 16:26:37 -- common/autotest_common.sh@149 -- # : 0 00:15:28.592 16:26:37 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:15:28.592 16:26:37 -- common/autotest_common.sh@151 -- # : 0 00:15:28.592 16:26:37 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:15:28.592 16:26:37 -- common/autotest_common.sh@153 -- # : mlx5 00:15:28.592 16:26:37 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:15:28.592 16:26:37 -- common/autotest_common.sh@155 -- # : 0 00:15:28.592 16:26:37 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:15:28.592 16:26:37 -- common/autotest_common.sh@157 -- # : 0 00:15:28.592 16:26:37 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:15:28.592 16:26:37 -- common/autotest_common.sh@159 -- # : 0 00:15:28.592 16:26:37 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:15:28.592 16:26:37 -- common/autotest_common.sh@161 -- # : 0 00:15:28.592 16:26:37 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:15:28.592 16:26:37 -- common/autotest_common.sh@163 -- # : 0 00:15:28.592 16:26:37 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:15:28.592 16:26:37 -- common/autotest_common.sh@166 -- # : 00:15:28.592 16:26:37 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:15:28.592 16:26:37 -- common/autotest_common.sh@168 -- # : 0 00:15:28.592 16:26:37 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:15:28.592 16:26:37 -- common/autotest_common.sh@170 -- # : 0 00:15:28.592 16:26:37 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:15:28.592 16:26:37 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:15:28.592 16:26:37 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:15:28.592 16:26:37 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:15:28.592 16:26:37 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:15:28.592 16:26:37 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:28.592 16:26:37 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:28.592 16:26:37 -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:28.593 16:26:37 -- common/autotest_common.sh@177 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:28.593 16:26:37 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:15:28.593 16:26:37 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:15:28.593 16:26:37 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:15:28.593 16:26:37 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:15:28.593 16:26:37 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:15:28.593 16:26:37 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:15:28.593 16:26:37 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:28.593 16:26:37 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:28.593 16:26:37 -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:15:28.593 16:26:37 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:15:28.593 16:26:37 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:15:28.593 16:26:37 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:15:28.593 16:26:37 -- common/autotest_common.sh@199 -- # cat 00:15:28.593 16:26:37 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:15:28.593 16:26:37 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:28.593 16:26:37 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:28.593 16:26:37 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:15:28.593 16:26:37 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 
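[editor's note] The exports traced in the segment above are how autotest_common.sh assembles the runtime environment for every SPDK binary the test launches: library and Python search paths for the freshly built artifacts, then sanitizer behaviour, then the default RPC socket. A minimal sketch of that pattern, using only values visible in this trace (the repository root is abbreviated and the repeated path entries are collapsed), is:

  root=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  # library and Python paths for the just-built SPDK/DPDK artifacts
  export SPDK_LIB_DIR=$root/build/lib
  export DPDK_LIB_DIR=$root/dpdk/build/lib
  export LD_LIBRARY_PATH=$SPDK_LIB_DIR:$DPDK_LIB_DIR:$root/build/libvfio-user/usr/local/lib:$LD_LIBRARY_PATH
  export PYTHONPATH=$root/python:$root/test/rpc_plugins:$PYTHONPATH
  export PYTHONDONTWRITEBYTECODE=1
  # sanitizer behaviour: abort on error, keep coredumps, suppress the known libfuse leak
  export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
  export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
  echo leak:libfuse3.so > /var/tmp/asan_suppression_file
  export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
  # rpc_cmd talks to this UNIX socket unless a test overrides it
  export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock

This is a sketch of the pattern only; the actual script re-sources these files several times, which is why the paths in the raw trace appear duplicated.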
00:15:28.593 16:26:37 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:15:28.593 16:26:37 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:15:28.593 16:26:37 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:15:28.593 16:26:37 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:15:28.593 16:26:37 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:15:28.593 16:26:37 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:15:28.593 16:26:37 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:28.593 16:26:37 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:28.593 16:26:37 -- common/autotest_common.sh@243 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:28.593 16:26:37 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:28.593 16:26:37 -- common/autotest_common.sh@245 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:15:28.593 16:26:37 -- common/autotest_common.sh@245 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:15:28.593 16:26:37 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:28.593 16:26:37 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:28.593 16:26:37 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:15:28.593 16:26:37 -- common/autotest_common.sh@252 -- # export valgrind= 00:15:28.593 16:26:37 -- common/autotest_common.sh@252 -- # valgrind= 00:15:28.593 16:26:37 -- common/autotest_common.sh@258 -- # uname -s 00:15:28.593 16:26:37 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:15:28.593 16:26:37 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:15:28.593 16:26:37 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:15:28.593 16:26:37 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:15:28.593 16:26:37 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:15:28.593 16:26:37 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:15:28.593 16:26:37 -- common/autotest_common.sh@268 -- # MAKE=make 00:15:28.593 16:26:37 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j72 00:15:28.593 16:26:37 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:15:28.593 16:26:37 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:15:28.593 16:26:37 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:15:28.593 16:26:37 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:15:28.593 16:26:37 -- common/autotest_common.sh@289 -- # for i in "$@" 00:15:28.593 16:26:37 -- common/autotest_common.sh@290 -- # case "$i" in 00:15:28.593 16:26:37 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=rdma 00:15:28.593 16:26:37 -- common/autotest_common.sh@307 -- # [[ -z 469809 ]] 00:15:28.593 16:26:37 -- common/autotest_common.sh@307 -- # kill -0 469809 00:15:28.593 16:26:37 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:15:28.593 16:26:37 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:15:28.593 16:26:37 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:15:28.593 
16:26:37 -- common/autotest_common.sh@320 -- # local mount target_dir 00:15:28.593 16:26:37 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:15:28.593 16:26:37 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:15:28.593 16:26:37 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:15:28.593 16:26:37 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:15:28.593 16:26:37 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.59sHrZ 00:15:28.593 16:26:37 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:15:28.593 16:26:37 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:15:28.593 16:26:37 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:15:28.593 16:26:37 -- common/autotest_common.sh@344 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.59sHrZ/tests/target /tmp/spdk.59sHrZ 00:15:28.593 16:26:37 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:15:28.593 16:26:37 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:15:28.593 16:26:37 -- common/autotest_common.sh@316 -- # df -T 00:15:28.593 16:26:37 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:15:28.593 16:26:37 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_devtmpfs 00:15:28.593 16:26:37 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:15:28.593 16:26:37 -- common/autotest_common.sh@351 -- # avails["$mount"]=67108864 00:15:28.593 16:26:37 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:15:28.593 16:26:37 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:15:28.593 16:26:37 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:15:28.593 16:26:37 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_root 00:15:28.593 16:26:37 -- common/autotest_common.sh@350 -- # fss["$mount"]=overlay 00:15:28.593 16:26:37 -- common/autotest_common.sh@351 -- # avails["$mount"]=56426172416 00:15:28.593 16:26:37 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67027496960 00:15:28.593 16:26:37 -- common/autotest_common.sh@352 -- # uses["$mount"]=10601324544 00:15:28.593 16:26:37 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:15:28.593 16:26:37 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:15:28.593 16:26:37 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:15:28.593 16:26:37 -- common/autotest_common.sh@351 -- # avails["$mount"]=33499115520 00:15:28.593 16:26:37 -- common/autotest_common.sh@351 -- # sizes["$mount"]=33513746432 00:15:28.593 16:26:37 -- common/autotest_common.sh@352 -- # uses["$mount"]=14630912 00:15:28.593 16:26:37 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:15:28.593 16:26:37 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:15:28.593 16:26:37 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:15:28.593 16:26:37 -- common/autotest_common.sh@351 -- # avails["$mount"]=13382656000 00:15:28.593 16:26:37 -- common/autotest_common.sh@351 -- # sizes["$mount"]=13405499392 00:15:28.853 16:26:37 -- common/autotest_common.sh@352 -- # uses["$mount"]=22843392 00:15:28.853 16:26:37 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:15:28.853 16:26:37 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 
00:15:28.853 16:26:37 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:15:28.853 16:26:37 -- common/autotest_common.sh@351 -- # avails["$mount"]=33513439232 00:15:28.853 16:26:37 -- common/autotest_common.sh@351 -- # sizes["$mount"]=33513750528 00:15:28.853 16:26:37 -- common/autotest_common.sh@352 -- # uses["$mount"]=311296 00:15:28.853 16:26:37 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:15:28.853 16:26:37 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:15:28.853 16:26:37 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:15:28.853 16:26:37 -- common/autotest_common.sh@351 -- # avails["$mount"]=6702743552 00:15:28.853 16:26:37 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6702747648 00:15:28.853 16:26:37 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:15:28.853 16:26:37 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:15:28.853 16:26:37 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:15:28.853 * Looking for test storage... 00:15:28.853 16:26:37 -- common/autotest_common.sh@357 -- # local target_space new_size 00:15:28.853 16:26:37 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:15:28.853 16:26:37 -- common/autotest_common.sh@361 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:28.853 16:26:37 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:15:28.853 16:26:37 -- common/autotest_common.sh@361 -- # mount=/ 00:15:28.853 16:26:37 -- common/autotest_common.sh@363 -- # target_space=56426172416 00:15:28.853 16:26:37 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:15:28.853 16:26:37 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:15:28.853 16:26:37 -- common/autotest_common.sh@369 -- # [[ overlay == tmpfs ]] 00:15:28.853 16:26:37 -- common/autotest_common.sh@369 -- # [[ overlay == ramfs ]] 00:15:28.853 16:26:37 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:15:28.853 16:26:37 -- common/autotest_common.sh@370 -- # new_size=12815917056 00:15:28.853 16:26:37 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:15:28.853 16:26:37 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:28.853 16:26:37 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:28.853 16:26:37 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:28.853 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:28.853 16:26:37 -- common/autotest_common.sh@378 -- # return 0 00:15:28.853 16:26:37 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:15:28.853 16:26:37 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:15:28.853 16:26:37 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:15:28.853 16:26:37 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:15:28.853 16:26:37 -- common/autotest_common.sh@1673 -- # true 00:15:28.853 16:26:37 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:15:28.853 16:26:37 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:15:28.853 16:26:37 -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:15:28.853 16:26:37 -- common/autotest_common.sh@27 -- # exec 00:15:28.853 16:26:37 -- common/autotest_common.sh@29 -- # exec 00:15:28.853 16:26:37 -- common/autotest_common.sh@31 -- # xtrace_restore 00:15:28.853 16:26:37 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:15:28.853 16:26:37 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:15:28.853 16:26:37 -- common/autotest_common.sh@18 -- # set -x 00:15:28.853 16:26:37 -- target/device_removal.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:28.853 16:26:37 -- nvmf/common.sh@7 -- # uname -s 00:15:28.853 16:26:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:28.853 16:26:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:28.853 16:26:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:28.853 16:26:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:28.853 16:26:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:28.853 16:26:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:28.853 16:26:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:28.853 16:26:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:28.853 16:26:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:28.853 16:26:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:28.853 16:26:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:15:28.853 16:26:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:15:28.853 16:26:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:28.853 16:26:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:28.853 16:26:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:28.853 16:26:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:28.853 16:26:37 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:28.853 16:26:37 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:28.854 16:26:37 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:28.854 16:26:37 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:28.854 16:26:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.854 16:26:37 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.854 16:26:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.854 16:26:37 -- paths/export.sh@5 -- # export PATH 00:15:28.854 16:26:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.854 16:26:37 -- nvmf/common.sh@47 -- # : 0 00:15:28.854 16:26:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:28.854 16:26:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:28.854 16:26:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:28.854 16:26:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:28.854 16:26:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:28.854 16:26:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:28.854 16:26:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:28.854 16:26:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:28.854 16:26:37 -- target/device_removal.sh@13 -- # tgt_core_mask=0x3 00:15:28.854 16:26:37 -- target/device_removal.sh@14 -- # bdevperf_core_mask=0x4 00:15:28.854 16:26:37 -- target/device_removal.sh@15 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:28.854 16:26:37 -- target/device_removal.sh@16 -- # bdevperf_rpc_pid=-1 00:15:28.854 16:26:37 -- target/device_removal.sh@18 -- # nvmftestinit 00:15:28.854 16:26:37 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:15:28.854 16:26:37 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:28.854 16:26:37 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:28.854 16:26:37 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:28.854 16:26:37 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:28.854 16:26:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:28.854 16:26:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:28.854 16:26:37 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:28.854 16:26:37 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:28.854 16:26:37 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:28.854 16:26:37 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:28.854 16:26:37 -- common/autotest_common.sh@10 -- # set +x 00:15:35.426 16:26:43 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:35.426 16:26:43 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:35.426 16:26:43 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:35.426 16:26:43 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:35.426 16:26:43 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:35.426 16:26:43 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:35.426 16:26:43 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:35.426 16:26:43 -- nvmf/common.sh@295 -- # net_devs=() 00:15:35.426 16:26:43 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:35.426 16:26:43 -- nvmf/common.sh@296 -- # e810=() 00:15:35.426 16:26:43 -- nvmf/common.sh@296 -- # local -ga e810 00:15:35.426 16:26:43 -- nvmf/common.sh@297 -- # x722=() 00:15:35.426 16:26:43 -- nvmf/common.sh@297 -- # local -ga x722 00:15:35.426 16:26:43 -- nvmf/common.sh@298 -- # mlx=() 00:15:35.426 16:26:43 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:35.426 16:26:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:35.426 16:26:44 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:35.426 16:26:44 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:35.426 16:26:44 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:35.426 16:26:44 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:35.426 16:26:44 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:35.426 16:26:44 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:35.426 16:26:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:35.426 16:26:44 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:35.426 16:26:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:35.426 16:26:44 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:35.426 16:26:44 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:35.426 16:26:44 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:35.426 16:26:44 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:35.426 16:26:44 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:35.426 16:26:44 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:15:35.426 16:26:44 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:15:35.426 16:26:44 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:35.426 16:26:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:35.426 16:26:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:15:35.426 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:15:35.426 16:26:44 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:35.426 16:26:44 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:35.426 16:26:44 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:15:35.426 16:26:44 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:15:35.426 16:26:44 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:35.426 16:26:44 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:35.426 16:26:44 -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:15:35.426 16:26:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:15:35.426 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:15:35.426 16:26:44 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:35.426 16:26:44 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:35.426 16:26:44 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:15:35.426 16:26:44 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:15:35.426 16:26:44 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:35.426 16:26:44 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:35.426 16:26:44 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:35.426 16:26:44 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:15:35.426 16:26:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:35.426 16:26:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:35.426 16:26:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:35.426 16:26:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.426 16:26:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:15:35.426 Found net devices under 0000:18:00.0: mlx_0_0 00:15:35.426 16:26:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:35.426 16:26:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:35.426 16:26:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:35.426 16:26:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:35.426 16:26:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.426 16:26:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:15:35.426 Found net devices under 0000:18:00.1: mlx_0_1 00:15:35.426 16:26:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:35.426 16:26:44 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:35.426 16:26:44 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:35.426 16:26:44 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:35.426 16:26:44 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:15:35.426 16:26:44 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:15:35.426 16:26:44 -- nvmf/common.sh@409 -- # rdma_device_init 00:15:35.426 16:26:44 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:15:35.426 16:26:44 -- nvmf/common.sh@58 -- # uname 00:15:35.426 16:26:44 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:35.426 16:26:44 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:35.426 16:26:44 -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:35.426 16:26:44 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:35.426 16:26:44 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:35.426 16:26:44 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:35.426 16:26:44 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:35.426 16:26:44 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:35.426 16:26:44 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:15:35.426 16:26:44 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:35.426 16:26:44 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:35.426 16:26:44 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:35.426 16:26:44 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:35.426 16:26:44 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:35.426 16:26:44 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:35.426 16:26:44 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 
00:15:35.426 16:26:44 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:35.426 16:26:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:35.426 16:26:44 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:35.426 16:26:44 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:35.426 16:26:44 -- nvmf/common.sh@105 -- # continue 2 00:15:35.426 16:26:44 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:35.426 16:26:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:35.426 16:26:44 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:35.426 16:26:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:35.426 16:26:44 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:35.426 16:26:44 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:35.426 16:26:44 -- nvmf/common.sh@105 -- # continue 2 00:15:35.426 16:26:44 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:35.426 16:26:44 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:15:35.426 16:26:44 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:35.426 16:26:44 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:35.426 16:26:44 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:35.426 16:26:44 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:35.426 16:26:44 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:35.426 16:26:44 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:35.426 16:26:44 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:15:35.426 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:35.426 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:15:35.426 altname enp24s0f0np0 00:15:35.426 altname ens785f0np0 00:15:35.426 inet 192.168.100.8/24 scope global mlx_0_0 00:15:35.426 valid_lft forever preferred_lft forever 00:15:35.426 16:26:44 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:35.426 16:26:44 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:15:35.426 16:26:44 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:35.426 16:26:44 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:35.427 16:26:44 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:35.427 16:26:44 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:35.427 16:26:44 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:35.427 16:26:44 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:35.427 16:26:44 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:15:35.427 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:35.427 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:15:35.427 altname enp24s0f1np1 00:15:35.427 altname ens785f1np1 00:15:35.427 inet 192.168.100.9/24 scope global mlx_0_1 00:15:35.427 valid_lft forever preferred_lft forever 00:15:35.427 16:26:44 -- nvmf/common.sh@411 -- # return 0 00:15:35.427 16:26:44 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:35.427 16:26:44 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:35.427 16:26:44 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:15:35.427 16:26:44 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:15:35.427 16:26:44 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:35.427 16:26:44 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:35.427 16:26:44 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:35.427 16:26:44 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:35.427 16:26:44 -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:35.427 16:26:44 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:35.427 16:26:44 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:35.427 16:26:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:35.427 16:26:44 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:35.427 16:26:44 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:35.427 16:26:44 -- nvmf/common.sh@105 -- # continue 2 00:15:35.427 16:26:44 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:35.427 16:26:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:35.427 16:26:44 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:35.427 16:26:44 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:35.427 16:26:44 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:35.427 16:26:44 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:35.427 16:26:44 -- nvmf/common.sh@105 -- # continue 2 00:15:35.427 16:26:44 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:35.427 16:26:44 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:15:35.427 16:26:44 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:35.427 16:26:44 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:35.427 16:26:44 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:35.427 16:26:44 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:35.427 16:26:44 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:35.427 16:26:44 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:15:35.427 16:26:44 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:35.427 16:26:44 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:35.427 16:26:44 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:35.427 16:26:44 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:35.427 16:26:44 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:15:35.427 192.168.100.9' 00:15:35.427 16:26:44 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:15:35.427 192.168.100.9' 00:15:35.427 16:26:44 -- nvmf/common.sh@446 -- # head -n 1 00:15:35.427 16:26:44 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:35.427 16:26:44 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:15:35.427 192.168.100.9' 00:15:35.427 16:26:44 -- nvmf/common.sh@447 -- # head -n 1 00:15:35.427 16:26:44 -- nvmf/common.sh@447 -- # tail -n +2 00:15:35.427 16:26:44 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:35.427 16:26:44 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:15:35.427 16:26:44 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:35.427 16:26:44 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:15:35.427 16:26:44 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:15:35.427 16:26:44 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:15:35.427 16:26:44 -- target/device_removal.sh@235 -- # BOND_NAME=bond_nvmf 00:15:35.427 16:26:44 -- target/device_removal.sh@236 -- # BOND_IP=10.11.11.26 00:15:35.427 16:26:44 -- target/device_removal.sh@237 -- # BOND_MASK=24 00:15:35.427 16:26:44 -- target/device_removal.sh@311 -- # run_test nvmf_device_removal_pci_remove_no_srq test_remove_and_rescan --no-srq 00:15:35.427 16:26:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:35.427 16:26:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:35.427 16:26:44 -- common/autotest_common.sh@10 -- # set +x 00:15:35.427 
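[editor's note] At this point nvmftestinit has resolved both RDMA-capable ports (mlx_0_0 and mlx_0_1) to IPv4 addresses and loaded nvme-rdma, so the device-removal subtest can start. The address selection traced above reduces to the following sketch; get_rdma_if_list stands in for the interface-discovery loop shown in the log, and the commented addresses are the ones found on this host:

  # collect one IPv4 address per RDMA-backed netdev (mlx_0_0 and mlx_0_1 in this run)
  RDMA_IP_LIST=$(for ifc in $(get_rdma_if_list); do
          ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
  done)
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9
  [ -n "$NVMF_FIRST_TARGET_IP" ] || exit 1   # no RDMA-capable NIC means the phy test cannot run
  NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
  modprobe nvme-rdma

This mirrors the head/tail pipeline and transport options visible in the trace rather than reproducing nvmf/common.sh verbatim.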
************************************ 00:15:35.427 START TEST nvmf_device_removal_pci_remove_no_srq 00:15:35.427 ************************************ 00:15:35.427 16:26:44 -- common/autotest_common.sh@1111 -- # test_remove_and_rescan --no-srq 00:15:35.427 16:26:44 -- target/device_removal.sh@128 -- # nvmfappstart -m 0x3 00:15:35.427 16:26:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:35.427 16:26:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:35.427 16:26:44 -- common/autotest_common.sh@10 -- # set +x 00:15:35.427 16:26:44 -- nvmf/common.sh@470 -- # nvmfpid=472742 00:15:35.427 16:26:44 -- nvmf/common.sh@471 -- # waitforlisten 472742 00:15:35.427 16:26:44 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:35.427 16:26:44 -- common/autotest_common.sh@817 -- # '[' -z 472742 ']' 00:15:35.427 16:26:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.427 16:26:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:35.427 16:26:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.427 16:26:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:35.687 16:26:44 -- common/autotest_common.sh@10 -- # set +x 00:15:35.687 [2024-04-26 16:26:44.495469] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:15:35.687 [2024-04-26 16:26:44.495528] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:35.687 EAL: No free 2048 kB hugepages reported on node 1 00:15:35.687 [2024-04-26 16:26:44.569428] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:35.687 [2024-04-26 16:26:44.653676] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:35.687 [2024-04-26 16:26:44.653724] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:35.687 [2024-04-26 16:26:44.653734] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:35.687 [2024-04-26 16:26:44.653758] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:35.687 [2024-04-26 16:26:44.653766] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
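nvmfappstart launches build/bin/nvmf_tgt with '-i 0 -e 0xFFFF -m 0x3' and then blocks in waitforlisten until the app answers on /var/tmp/spdk.sock. A simplified sketch of that start-and-wait step; the explicit polling loop and the use of rpc_get_methods as the readiness probe are assumptions for illustration, not the harness implementation (which lives in nvmf/common.sh and autotest_common.sh):

  rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!
  # Poll the RPC socket until the target is ready (waitforlisten equivalent).
  for _ in $(seq 1 100); do
      "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
      sleep 0.1
  done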
00:15:35.687 [2024-04-26 16:26:44.653818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:35.687 [2024-04-26 16:26:44.653821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.624 16:26:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:36.624 16:26:45 -- common/autotest_common.sh@850 -- # return 0 00:15:36.624 16:26:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:36.624 16:26:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:36.624 16:26:45 -- common/autotest_common.sh@10 -- # set +x 00:15:36.624 16:26:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:36.624 16:26:45 -- target/device_removal.sh@130 -- # create_subsystem_and_connect --no-srq 00:15:36.624 16:26:45 -- target/device_removal.sh@45 -- # local -gA netdev_nvme_dict 00:15:36.624 16:26:45 -- target/device_removal.sh@46 -- # netdev_nvme_dict=() 00:15:36.624 16:26:45 -- target/device_removal.sh@48 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 --no-srq 00:15:36.624 16:26:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:36.624 16:26:45 -- common/autotest_common.sh@10 -- # set +x 00:15:36.624 [2024-04-26 16:26:45.359358] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x10f9c90/0x10fe180) succeed. 00:15:36.624 [2024-04-26 16:26:45.368330] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x10fb190/0x113f810) succeed. 00:15:36.624 16:26:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:36.624 16:26:45 -- target/device_removal.sh@49 -- # get_rdma_if_list 00:15:36.624 16:26:45 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:36.624 16:26:45 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:36.624 16:26:45 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:36.624 16:26:45 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:36.624 16:26:45 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:36.624 16:26:45 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:36.624 16:26:45 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:36.624 16:26:45 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:36.624 16:26:45 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:36.624 16:26:45 -- nvmf/common.sh@105 -- # continue 2 00:15:36.624 16:26:45 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:36.624 16:26:45 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:36.624 16:26:45 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:36.624 16:26:45 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:36.624 16:26:45 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:36.624 16:26:45 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:36.624 16:26:45 -- nvmf/common.sh@105 -- # continue 2 00:15:36.624 16:26:45 -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:15:36.624 16:26:45 -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_0 00:15:36.624 16:26:45 -- target/device_removal.sh@25 -- # local -a dev_name 00:15:36.624 16:26:45 -- target/device_removal.sh@27 -- # dev_name=mlx_0_0 00:15:36.624 16:26:45 -- target/device_removal.sh@28 -- # malloc_name=mlx_0_0 00:15:36.624 16:26:45 -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_0 00:15:36.624 16:26:45 -- target/device_removal.sh@21 -- 
# echo nqn.2016-06.io.spdk:system_mlx_0_0 00:15:36.624 16:26:45 -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:15:36.624 16:26:45 -- target/device_removal.sh@30 -- # get_ip_address mlx_0_0 00:15:36.624 16:26:45 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:36.624 16:26:45 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:36.624 16:26:45 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:36.624 16:26:45 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:36.624 16:26:45 -- target/device_removal.sh@30 -- # ip=192.168.100.8 00:15:36.624 16:26:45 -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_0 00:15:36.624 16:26:45 -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:15:36.624 16:26:45 -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:15:36.624 16:26:45 -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_0 00:15:36.624 16:26:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:36.624 16:26:45 -- common/autotest_common.sh@10 -- # set +x 00:15:36.624 16:26:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:36.624 16:26:45 -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_0 -a -s SPDK000mlx_0_0 00:15:36.624 16:26:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:36.624 16:26:45 -- common/autotest_common.sh@10 -- # set +x 00:15:36.624 16:26:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:36.624 16:26:45 -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_0 mlx_0_0 00:15:36.625 16:26:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:36.625 16:26:45 -- common/autotest_common.sh@10 -- # set +x 00:15:36.625 16:26:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:36.625 16:26:45 -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 00:15:36.625 16:26:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:36.625 16:26:45 -- common/autotest_common.sh@10 -- # set +x 00:15:36.625 [2024-04-26 16:26:45.494832] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:36.625 16:26:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:36.625 16:26:45 -- target/device_removal.sh@41 -- # return 0 00:15:36.625 16:26:45 -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_0 00:15:36.625 16:26:45 -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:15:36.625 16:26:45 -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_1 00:15:36.625 16:26:45 -- target/device_removal.sh@25 -- # local -a dev_name 00:15:36.625 16:26:45 -- target/device_removal.sh@27 -- # dev_name=mlx_0_1 00:15:36.625 16:26:45 -- target/device_removal.sh@28 -- # malloc_name=mlx_0_1 00:15:36.625 16:26:45 -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_1 00:15:36.625 16:26:45 -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:15:36.625 16:26:45 -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:15:36.625 16:26:45 -- target/device_removal.sh@30 -- # get_ip_address mlx_0_1 00:15:36.625 16:26:45 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:36.625 16:26:45 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:36.625 16:26:45 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:36.625 16:26:45 -- nvmf/common.sh@113 -- # cut 
-d/ -f1 00:15:36.625 16:26:45 -- target/device_removal.sh@30 -- # ip=192.168.100.9 00:15:36.625 16:26:45 -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_1 00:15:36.625 16:26:45 -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:15:36.625 16:26:45 -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:15:36.625 16:26:45 -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_1 00:15:36.625 16:26:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:36.625 16:26:45 -- common/autotest_common.sh@10 -- # set +x 00:15:36.625 16:26:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:36.625 16:26:45 -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_1 -a -s SPDK000mlx_0_1 00:15:36.625 16:26:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:36.625 16:26:45 -- common/autotest_common.sh@10 -- # set +x 00:15:36.625 16:26:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:36.625 16:26:45 -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_1 mlx_0_1 00:15:36.625 16:26:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:36.625 16:26:45 -- common/autotest_common.sh@10 -- # set +x 00:15:36.625 16:26:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:36.625 16:26:45 -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 00:15:36.625 16:26:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:36.625 16:26:45 -- common/autotest_common.sh@10 -- # set +x 00:15:36.625 [2024-04-26 16:26:45.578312] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:15:36.625 16:26:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:36.625 16:26:45 -- target/device_removal.sh@41 -- # return 0 00:15:36.625 16:26:45 -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_1 00:15:36.625 16:26:45 -- target/device_removal.sh@53 -- # return 0 00:15:36.625 16:26:45 -- target/device_removal.sh@132 -- # generate_io_traffic_with_bdevperf mlx_0_0 mlx_0_1 00:15:36.625 16:26:45 -- target/device_removal.sh@87 -- # dev_names=('mlx_0_0' 'mlx_0_1') 00:15:36.625 16:26:45 -- target/device_removal.sh@87 -- # local dev_names 00:15:36.625 16:26:45 -- target/device_removal.sh@89 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:36.625 16:26:45 -- target/device_removal.sh@91 -- # bdevperf_pid=472947 00:15:36.625 16:26:45 -- target/device_removal.sh@93 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; kill -9 $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:36.625 16:26:45 -- target/device_removal.sh@90 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:15:36.625 16:26:45 -- target/device_removal.sh@94 -- # waitforlisten 472947 /var/tmp/bdevperf.sock 00:15:36.625 16:26:45 -- common/autotest_common.sh@817 -- # '[' -z 472947 ']' 00:15:36.625 16:26:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:36.625 16:26:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:36.625 16:26:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
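For each port, create_subsystem_and_connect_on_netdev issues the same four RPCs seen above: create a 128 MB malloc bdev (512-byte blocks) named after the netdev, create nqn.2016-06.io.spdk:system_<dev>, attach the bdev as a namespace, and add an RDMA listener on the interface address at port 4420. A condensed sketch with the parameters copied from the trace; the create_port helper and the direct rpc.py invocation are illustrative (the harness routes these through rpc_cmd):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  create_port() {
      local dev=$1 ip=$2
      local nqn=nqn.2016-06.io.spdk:system_$dev
      $rpc bdev_malloc_create 128 512 -b "$dev"                 # backing bdev, 128 MB / 512 B blocks
      $rpc nvmf_create_subsystem "$nqn" -a -s "SPDK000$dev"     # allow any host, fixed serial
      $rpc nvmf_subsystem_add_ns "$nqn" "$dev"                  # expose the bdev as namespace 1
      $rpc nvmf_subsystem_add_listener "$nqn" -t rdma -a "$ip" -s 4420
  }
  create_port mlx_0_0 192.168.100.8
  create_port mlx_0_1 192.168.100.9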
00:15:36.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:36.625 16:26:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:36.625 16:26:45 -- common/autotest_common.sh@10 -- # set +x 00:15:37.563 16:26:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:37.563 16:26:46 -- common/autotest_common.sh@850 -- # return 0 00:15:37.563 16:26:46 -- target/device_removal.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:15:37.563 16:26:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:37.563 16:26:46 -- common/autotest_common.sh@10 -- # set +x 00:15:37.563 16:26:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:37.563 16:26:46 -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:15:37.563 16:26:46 -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_0 00:15:37.563 16:26:46 -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_0 00:15:37.563 16:26:46 -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:15:37.563 16:26:46 -- target/device_removal.sh@102 -- # get_ip_address mlx_0_0 00:15:37.563 16:26:46 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:37.563 16:26:46 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:37.563 16:26:46 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:37.563 16:26:46 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:37.563 16:26:46 -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.8 00:15:37.563 16:26:46 -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_0 -l -1 -o 1 00:15:37.563 16:26:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:37.563 16:26:46 -- common/autotest_common.sh@10 -- # set +x 00:15:37.563 Nvme_mlx_0_0n1 00:15:37.563 16:26:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:37.563 16:26:46 -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:15:37.563 16:26:46 -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_1 00:15:37.563 16:26:46 -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:15:37.564 16:26:46 -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:15:37.564 16:26:46 -- target/device_removal.sh@102 -- # get_ip_address mlx_0_1 00:15:37.564 16:26:46 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:37.564 16:26:46 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:37.564 16:26:46 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:37.564 16:26:46 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:37.564 16:26:46 -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.9 00:15:37.564 16:26:46 -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_1 -l -1 -o 1 00:15:37.564 16:26:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:37.564 16:26:46 -- common/autotest_common.sh@10 -- # set +x 00:15:37.823 Nvme_mlx_0_1n1 00:15:37.823 16:26:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:37.823 16:26:46 -- target/device_removal.sh@110 -- # bdevperf_rpc_pid=473139 00:15:37.823 16:26:46 -- target/device_removal.sh@112 -- # sleep 5 00:15:37.823 16:26:46 -- target/device_removal.sh@109 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:15:43.101 16:26:51 -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:15:43.101 16:26:51 -- target/device_removal.sh@135 -- # nvme_dev=mlx_0_0 00:15:43.101 16:26:51 -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_0 00:15:43.101 16:26:51 -- target/device_removal.sh@71 -- # dev_name=mlx_0_0 00:15:43.101 16:26:51 -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_0 00:15:43.101 16:26:51 -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:15:43.101 16:26:51 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.0/net/mlx_0_0/device 00:15:43.101 16:26:51 -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/infiniband 00:15:43.101 16:26:51 -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_0 00:15:43.101 16:26:51 -- target/device_removal.sh@137 -- # get_ip_address mlx_0_0 00:15:43.101 16:26:51 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:43.101 16:26:51 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:43.101 16:26:51 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:43.101 16:26:51 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:43.101 16:26:51 -- target/device_removal.sh@137 -- # origin_ip=192.168.100.8 00:15:43.101 16:26:51 -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_0 00:15:43.101 16:26:51 -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:15:43.101 16:26:51 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.0/net/mlx_0_0/device 00:15:43.101 16:26:51 -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0 00:15:43.101 16:26:51 -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:15:43.101 16:26:51 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:15:43.101 16:26:51 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:15:43.101 16:26:51 -- target/device_removal.sh@77 -- # grep mlx5_0 00:15:43.101 16:26:51 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:15:43.101 16:26:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:43.101 16:26:51 -- common/autotest_common.sh@10 -- # set +x 00:15:43.101 16:26:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:43.101 mlx5_0 00:15:43.101 16:26:51 -- target/device_removal.sh@78 -- # return 0 00:15:43.101 16:26:51 -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_0 00:15:43.101 16:26:51 -- target/device_removal.sh@66 -- # dev_name=mlx_0_0 00:15:43.101 16:26:51 -- target/device_removal.sh@67 -- # echo 1 00:15:43.101 16:26:51 -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_0 00:15:43.101 16:26:51 -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:15:43.101 16:26:51 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.0/net/mlx_0_0/device 00:15:43.101 [2024-04-26 16:26:51.817833] rdma.c:3563:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.8:4420 on device mlx5_0 is being removed. 
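remove_one_nic above resolves the netdev to its PCI function by following the /sys/.../net/<dev>/device symlink and then hot-removes the function through sysfs. The trace only shows the readlink and a bare 'echo 1'; redirecting that 1 into the function's 'remove' node is the standard Linux mechanism and is assumed here rather than shown in the trace:

  # Sketch: detach mlx_0_0's PCI function so the target sees a surprise removal.
  dev_name=mlx_0_0
  pci_dir=$(readlink -f "/sys/bus/pci/devices/0000:18:00.0/net/${dev_name}/device")
  # pci_dir resolves to /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0 on this rig
  echo 1 | sudo tee "${pci_dir}/remove" >/dev/null   # assumed target of the traced 'echo 1'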
00:15:43.101 [2024-04-26 16:26:51.818151] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
00:15:43.101 [2024-04-26 16:26:51.819165] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
00:15:43.101 [2024-04-26 16:26:51.819184] rdma.c: 845:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 100
00:15:43.101 [2024-04-26 16:26:51.819193] rdma.c: 632:nvmf_rdma_dump_qpair_contents: *ERROR*: Dumping contents of queue pair (QID 1)
00:15:43.101 [2024-04-26 16:26:51.819202] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1
00:15:43.101 [2024-04-26 16:26:51.819210] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2
[ ... nvmf_rdma_dump_request then prints a "Request Data From Pool" / "Request opcode" pair for each remaining queued request (data-from-pool 0 or 1, opcode 1 or 2); the tail of that dump follows ... ]
00:15:43.103 [2024-04-26 16:26:51.820634] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:15:43.103 [2024-04-26 16:26:51.820641] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:15:43.103 [2024-04-26 16:26:51.820649] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:15:43.103 [2024-04-26 16:26:51.820656] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:15:43.103 [2024-04-26 16:26:51.820665] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:15:43.103 [2024-04-26 16:26:51.820672] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:15:43.103 [2024-04-26 16:26:51.820682] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:15:43.103 [2024-04-26 16:26:51.820689] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:15:43.103 [2024-04-26 16:26:51.820698] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:15:43.103 [2024-04-26 16:26:51.820706] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:15:46.406 16:26:55 -- target/device_removal.sh@147 -- # seq 1 10 00:15:46.406 16:26:55 -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:15:46.406 16:26:55 -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:15:46.406 16:26:55 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:15:46.406 16:26:55 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:15:46.406 16:26:55 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:15:46.406 16:26:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:46.406 16:26:55 -- common/autotest_common.sh@10 -- # set +x 00:15:46.406 16:26:55 -- target/device_removal.sh@77 -- # grep mlx5_0 00:15:46.407 16:26:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:46.407 16:26:55 -- target/device_removal.sh@78 -- # return 1 00:15:46.407 16:26:55 -- target/device_removal.sh@149 -- # break 00:15:46.407 16:26:55 -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:15:46.407 16:26:55 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:15:46.407 16:26:55 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:15:46.407 16:26:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:46.407 16:26:55 -- common/autotest_common.sh@10 -- # set +x 00:15:46.407 16:26:55 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:15:46.407 16:26:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:46.407 16:26:55 -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:15:46.407 16:26:55 -- target/device_removal.sh@160 -- # rescan_pci 00:15:46.407 16:26:55 -- target/device_removal.sh@57 -- # echo 1 00:15:47.345 [2024-04-26 16:26:56.159308] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x10ed330/0x10fe180) succeed. 00:15:47.345 [2024-04-26 16:26:56.159386] rdma.c:3305:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.8:4420 is still failed(-1) to listen. 
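The recovery path the trace walks through next: rescan the PCI bus so the removed function reappears, bring the re-created netdev back up, restore its address, and poll nvmf_get_stats until the RDMA device count climbs back above the post-removal value (ib_count_after_remove=1). A condensed sketch; /sys/bus/pci/rescan as the target of rescan_pci's 'echo 1' and the sleep between polls are assumptions:

  echo 1 | sudo tee /sys/bus/pci/rescan >/dev/null   # rediscover the removed function (assumed path)
  ip link set mlx_0_0 up
  ip addr add 192.168.100.8/24 dev mlx_0_0           # the address was lost with the device
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  for _ in $(seq 1 10); do
      ib_count=$($rpc nvmf_get_stats | jq -r '.poll_groups[0].transports[].devices | length')
      (( ib_count > 1 )) && break                    # 1 == ib_count_after_remove
      sleep 1
  done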
00:15:47.345 16:26:56 -- target/device_removal.sh@162 -- # seq 1 10 00:15:47.345 16:26:56 -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:15:47.345 16:26:56 -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/net 00:15:47.345 16:26:56 -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_0 00:15:47.345 16:26:56 -- target/device_removal.sh@164 -- # [[ -z mlx_0_0 ]] 00:15:47.345 16:26:56 -- target/device_removal.sh@166 -- # [[ mlx_0_0 != \m\l\x\_\0\_\0 ]] 00:15:47.345 16:26:56 -- target/device_removal.sh@171 -- # break 00:15:47.345 16:26:56 -- target/device_removal.sh@175 -- # [[ -z mlx_0_0 ]] 00:15:47.345 16:26:56 -- target/device_removal.sh@179 -- # ip link set mlx_0_0 up 00:15:47.913 16:26:56 -- target/device_removal.sh@180 -- # get_ip_address mlx_0_0 00:15:47.913 16:26:56 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:47.913 16:26:56 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:47.913 16:26:56 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:47.913 16:26:56 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:47.913 16:26:56 -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:15:47.913 16:26:56 -- target/device_removal.sh@181 -- # ip addr add 192.168.100.8/24 dev mlx_0_0 00:15:47.913 16:26:56 -- target/device_removal.sh@186 -- # seq 1 10 00:15:47.913 16:26:56 -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:15:47.913 16:26:56 -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:15:47.913 16:26:56 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:15:47.913 16:26:56 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:15:47.913 16:26:56 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:15:47.913 16:26:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:47.913 16:26:56 -- common/autotest_common.sh@10 -- # set +x 00:15:47.913 [2024-04-26 16:26:56.858870] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:47.913 [2024-04-26 16:26:56.858908] rdma.c:3311:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.8:4420 come back 00:15:47.913 [2024-04-26 16:26:56.858928] rdma.c:3841:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:15:47.913 [2024-04-26 16:26:56.858944] rdma.c:3841:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:15:47.913 16:26:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:47.913 16:26:56 -- target/device_removal.sh@187 -- # ib_count=2 00:15:47.913 16:26:56 -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:15:47.913 16:26:56 -- target/device_removal.sh@189 -- # break 00:15:47.913 16:26:56 -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:15:47.913 16:26:56 -- target/device_removal.sh@135 -- # nvme_dev=mlx_0_1 00:15:47.913 16:26:56 -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_1 00:15:47.913 16:26:56 -- target/device_removal.sh@71 -- # dev_name=mlx_0_1 00:15:47.913 16:26:56 -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_1 00:15:47.913 16:26:56 -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:15:47.913 16:26:56 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.1/net/mlx_0_1/device 00:15:47.913 16:26:56 -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.1/infiniband 00:15:47.913 16:26:56 -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_1 00:15:47.913 
16:26:56 -- target/device_removal.sh@137 -- # get_ip_address mlx_0_1 00:15:47.913 16:26:56 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:47.913 16:26:56 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:47.913 16:26:56 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:47.913 16:26:56 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:47.913 16:26:56 -- target/device_removal.sh@137 -- # origin_ip=192.168.100.9 00:15:47.913 16:26:56 -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_1 00:15:47.913 16:26:56 -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:15:48.173 16:26:56 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.1/net/mlx_0_1/device 00:15:48.173 16:26:56 -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:17/0000:17:00.0/0000:18:00.1 00:15:48.173 16:26:56 -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:15:48.173 16:26:56 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:15:48.173 16:26:56 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:15:48.173 16:26:56 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:15:48.173 16:26:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:48.173 16:26:56 -- common/autotest_common.sh@10 -- # set +x 00:15:48.173 16:26:56 -- target/device_removal.sh@77 -- # grep mlx5_1 00:15:48.173 16:26:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:48.173 mlx5_1 00:15:48.173 16:26:56 -- target/device_removal.sh@78 -- # return 0 00:15:48.173 16:26:56 -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_1 00:15:48.173 16:26:56 -- target/device_removal.sh@66 -- # dev_name=mlx_0_1 00:15:48.173 16:26:56 -- target/device_removal.sh@67 -- # echo 1 00:15:48.173 16:26:56 -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_1 00:15:48.173 16:26:56 -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:15:48.173 16:26:56 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.1/net/mlx_0_1/device 00:15:48.173 [2024-04-26 16:26:57.034016] rdma.c:3563:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.9:4420 on device mlx5_1 is being removed. 
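Before pulling each function, the test verifies that the corresponding ibv device is still registered with the target; check_rdma_dev_exists_in_nvmf_tgt does that by filtering nvmf_get_stats, as sketched below. The direct rpc.py call and grep -q are illustrative; the harness routes the call through rpc_cmd:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  check_rdma_dev_exists_in_nvmf_tgt() {
      local rdma_dev_name=$1
      # nvmf_get_stats lists every ibv device attached to the RDMA transport
      $rpc nvmf_get_stats \
          | jq -r '.poll_groups[0].transports[].devices[].name' \
          | grep -q "$rdma_dev_name"
  }
  check_rdma_dev_exists_in_nvmf_tgt mlx5_1 && echo "mlx5_1 registered"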
00:15:48.173 [2024-04-26 16:26:57.034093] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:15:48.173 [2024-04-26 16:26:57.039312] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:15:48.173 [2024-04-26 16:26:57.039329] rdma.c: 845:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 92 00:15:48.173 [2024-04-26 16:26:57.039338] rdma.c: 632:nvmf_rdma_dump_qpair_contents: *ERROR*: Dumping contents of queue pair (QID 1) 00:15:48.173 [2024-04-26 16:26:57.039350] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:15:48.173 [2024-04-26 16:26:57.039358] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:15:48.173 [2024-04-26 16:26:57.039366] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:15:48.173 [2024-04-26 16:26:57.039373] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:15:48.174 [2024-04-26 16:26:57.039381] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:15:48.174 [2024-04-26 16:26:57.039388] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:15:48.174 [2024-04-26 16:26:57.039396] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:15:48.174 [2024-04-26 16:26:57.039403] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:15:48.174 [2024-04-26 16:26:57.039410] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:15:48.174 [2024-04-26 16:26:57.039418] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:15:48.174 [2024-04-26 16:26:57.039425] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:15:48.174 [2024-04-26 16:26:57.039439] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:15:48.174 [2024-04-26 16:26:57.039447] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:15:48.174 [2024-04-26 16:26:57.039455] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:15:48.174 [2024-04-26 16:26:57.039462] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:15:48.174 [2024-04-26 16:26:57.039469] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:15:48.174 [2024-04-26 16:26:57.039476] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:15:48.174 [2024-04-26 16:26:57.039484] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:15:48.174 [2024-04-26 16:26:57.039491] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:15:48.174 [2024-04-26 16:26:57.039499] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:15:48.174 [2024-04-26 16:26:57.039506] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:15:48.174 [2024-04-26 16:26:57.039513] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:15:48.174 [2024-04-26 16:26:57.039520] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:15:48.174 [2024-04-26 16:26:57.039527] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:15:48.174 [2024-04-26 16:26:57.039534] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:15:48.174 [2024-04-26 16:26:57.039542] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:15:48.174 [2024-04-26 16:26:57.039550] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:15:48.174 [2024-04-26 16:26:57.039557] rdma.c: 
620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1
[ ... nvmf_rdma_dump_request continues in the same "Request Data From Pool" / "Request opcode" pattern for the remaining requests queued on this qpair ... ]
00:15:48.175 [2024-04-26 16:26:57.040288] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:15:48.175 [2024-04-26 16:26:57.040295] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:15:48.175 [2024-04-26 16:26:57.040302] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:15:48.175 [2024-04-26 16:26:57.040310] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:15:48.175 [2024-04-26 16:26:57.040317] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:15:48.175 [2024-04-26 16:26:57.040324] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:15:48.175 [2024-04-26 16:26:57.040331] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:15:48.175 [2024-04-26 16:26:57.040339] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:15:48.175 [2024-04-26 16:26:57.040350] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:15:48.175 [2024-04-26 16:26:57.040358] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:15:48.175 [2024-04-26 16:26:57.040366] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:15:48.175 [2024-04-26 16:26:57.040374] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:15:48.175 [2024-04-26 16:26:57.040382] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:15:48.175 [2024-04-26 16:26:57.040389] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:15:48.175 [2024-04-26 16:26:57.040397] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:15:48.175 [2024-04-26 16:26:57.040404] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:15:48.175 [2024-04-26 16:26:57.040411] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:15:48.175 [2024-04-26 16:26:57.040419] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:15:48.175 [2024-04-26 16:26:57.040426] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:15:48.175 [2024-04-26 16:26:57.040433] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:15:48.175 [2024-04-26 16:26:57.040441] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:15:48.175 [2024-04-26 16:26:57.040448] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:15:48.175 [2024-04-26 16:26:57.040455] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:15:48.175 [2024-04-26 16:26:57.040463] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:15:48.175 [2024-04-26 16:26:57.040470] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:15:48.175 [2024-04-26 16:26:57.040477] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:15:48.175 [2024-04-26 16:26:57.040484] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:15:48.175 [2024-04-26 16:26:57.040491] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:15:48.175 [2024-04-26 16:26:57.040499] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:15:48.175 [2024-04-26 16:26:57.040506] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:15:48.175 [2024-04-26 16:26:57.040514] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:15:48.175 [2024-04-26 16:26:57.040521] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:15:48.175 [2024-04-26 16:26:57.040528] rdma.c: 
620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:15:48.175 [2024-04-26 16:26:57.040536] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:15:48.175 [2024-04-26 16:26:57.040543] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:15:48.175 [2024-04-26 16:26:57.040550] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:15:48.175 [2024-04-26 16:26:57.040558] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:15:48.175 [2024-04-26 16:26:57.040565] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:15:48.175 [2024-04-26 16:26:57.040573] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:15:48.175 [2024-04-26 16:26:57.040581] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:15:48.175 [2024-04-26 16:26:57.040588] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:15:48.175 [2024-04-26 16:26:57.040598] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:15:48.175 [2024-04-26 16:26:57.040606] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:15:48.175 [2024-04-26 16:26:57.040614] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:15:48.175 [2024-04-26 16:26:57.040621] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:15:48.175 [2024-04-26 16:26:57.040628] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:15:48.175 [2024-04-26 16:26:57.040635] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:15:48.175 [2024-04-26 16:26:57.040643] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:15:48.175 [2024-04-26 16:26:57.040650] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:15:48.175 [2024-04-26 16:26:57.040658] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:15:48.175 [2024-04-26 16:26:57.040665] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:15:48.175 [2024-04-26 16:26:57.040681] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:15:48.175 [2024-04-26 16:26:57.040688] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:15:48.175 [2024-04-26 16:26:57.040697] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:15:48.175 [2024-04-26 16:26:57.040709] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:15:48.175 [2024-04-26 16:26:57.040717] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:15:48.175 [2024-04-26 16:26:57.040724] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:15:48.175 [2024-04-26 16:26:57.040732] rdma.c: 618:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:15:48.175 [2024-04-26 16:26:57.040740] rdma.c: 620:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:15:52.366 16:27:00 -- target/device_removal.sh@147 -- # seq 1 10 00:15:52.366 16:27:00 -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:15:52.366 16:27:00 -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:15:52.366 16:27:00 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:15:52.366 16:27:00 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:15:52.366 16:27:00 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:15:52.366 16:27:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:52.366 16:27:00 -- target/device_removal.sh@77 -- # grep 
mlx5_1 00:15:52.366 16:27:00 -- common/autotest_common.sh@10 -- # set +x 00:15:52.366 16:27:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:52.366 16:27:00 -- target/device_removal.sh@78 -- # return 1 00:15:52.366 16:27:00 -- target/device_removal.sh@149 -- # break 00:15:52.366 16:27:00 -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:15:52.366 16:27:00 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:15:52.366 16:27:00 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:15:52.366 16:27:00 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:15:52.366 16:27:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:52.366 16:27:00 -- common/autotest_common.sh@10 -- # set +x 00:15:52.366 16:27:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:52.366 16:27:00 -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:15:52.366 16:27:00 -- target/device_removal.sh@160 -- # rescan_pci 00:15:52.366 16:27:00 -- target/device_removal.sh@57 -- # echo 1 00:15:52.624 [2024-04-26 16:27:01.627050] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1288b90/0x113f810) succeed. 00:15:52.624 [2024-04-26 16:27:01.627124] rdma.c:3305:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.9:4420 is still failed(-1) to listen. 00:15:52.624 16:27:01 -- target/device_removal.sh@162 -- # seq 1 10 00:15:52.624 16:27:01 -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:15:52.624 16:27:01 -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.1/net 00:15:52.882 16:27:01 -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_1 00:15:52.882 16:27:01 -- target/device_removal.sh@164 -- # [[ -z mlx_0_1 ]] 00:15:52.882 16:27:01 -- target/device_removal.sh@166 -- # [[ mlx_0_1 != \m\l\x\_\0\_\1 ]] 00:15:52.882 16:27:01 -- target/device_removal.sh@171 -- # break 00:15:52.882 16:27:01 -- target/device_removal.sh@175 -- # [[ -z mlx_0_1 ]] 00:15:52.882 16:27:01 -- target/device_removal.sh@179 -- # ip link set mlx_0_1 up 00:15:53.450 16:27:02 -- target/device_removal.sh@180 -- # get_ip_address mlx_0_1 00:15:53.450 16:27:02 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:53.450 16:27:02 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:53.450 16:27:02 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:53.450 16:27:02 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:53.450 16:27:02 -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:15:53.450 16:27:02 -- target/device_removal.sh@181 -- # ip addr add 192.168.100.9/24 dev mlx_0_1 00:15:53.450 [2024-04-26 16:27:02.313958] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:15:53.450 [2024-04-26 16:27:02.313995] rdma.c:3311:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.9:4420 come back 00:15:53.450 [2024-04-26 16:27:02.314013] rdma.c:3841:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:15:53.450 [2024-04-26 16:27:02.314031] rdma.c:3841:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:15:53.450 16:27:02 -- target/device_removal.sh@186 -- # seq 1 10 00:15:53.450 16:27:02 -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:15:53.450 16:27:02 -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:15:53.450 16:27:02 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:15:53.450 16:27:02 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:15:53.450 
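The trace above polls the running nvmf target over JSON-RPC and counts the RDMA devices reported in its poll-group stats to decide whether the removed device has come back. As a rough, stand-alone illustration only (not part of the test harness), the same check could be run by hand against a live target; the rpc.py path and the mlx5_1 device name are assumptions taken from this run, and the jq filters are the ones already shown in the trace:

#!/usr/bin/env bash
# Sketch: query nvmf target stats and inspect RDMA devices in the first poll group.
# Assumes an SPDK target is running and scripts/rpc.py is reachable from the cwd.

# Count RDMA devices currently known to the target's first poll group.
./scripts/rpc.py nvmf_get_stats \
  | jq -r '.poll_groups[0].transports[].devices | length'

# Check whether a specific IB device (mlx5_1 in this run) is present again.
./scripts/rpc.py nvmf_get_stats \
  | jq -r '.poll_groups[0].transports[].devices[].name' \
  | grep -q mlx5_1 && echo "mlx5_1 present"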
16:27:02 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:15:53.450 16:27:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:53.450 16:27:02 -- common/autotest_common.sh@10 -- # set +x 00:15:53.450 16:27:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:53.450 16:27:02 -- target/device_removal.sh@187 -- # ib_count=2 00:15:53.450 16:27:02 -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:15:53.450 16:27:02 -- target/device_removal.sh@189 -- # break 00:15:53.450 16:27:02 -- target/device_removal.sh@200 -- # stop_bdevperf 00:15:53.450 16:27:02 -- target/device_removal.sh@116 -- # wait 473139 00:17:14.894 0 00:17:14.894 16:28:16 -- target/device_removal.sh@118 -- # killprocess 472947 00:17:14.894 16:28:16 -- common/autotest_common.sh@936 -- # '[' -z 472947 ']' 00:17:14.894 16:28:16 -- common/autotest_common.sh@940 -- # kill -0 472947 00:17:14.894 16:28:17 -- common/autotest_common.sh@941 -- # uname 00:17:14.894 16:28:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:14.894 16:28:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 472947 00:17:14.894 16:28:17 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:14.895 16:28:17 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:14.895 16:28:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 472947' 00:17:14.895 killing process with pid 472947 00:17:14.895 16:28:17 -- common/autotest_common.sh@955 -- # kill 472947 00:17:14.895 16:28:17 -- common/autotest_common.sh@960 -- # wait 472947 00:17:14.895 16:28:17 -- target/device_removal.sh@119 -- # bdevperf_pid= 00:17:14.895 16:28:17 -- target/device_removal.sh@121 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt 00:17:14.895 [2024-04-26 16:26:45.635216] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:17:14.895 [2024-04-26 16:26:45.635265] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid472947 ] 00:17:14.895 EAL: No free 2048 kB hugepages reported on node 1 00:17:14.895 [2024-04-26 16:26:45.702960] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.895 [2024-04-26 16:26:45.779037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:14.895 Running I/O for 90 seconds... 
00:17:14.895 [2024-04-26 16:26:51.824177] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:17:14.895 [2024-04-26 16:26:51.824208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:14.895 [2024-04-26 16:26:51.824221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32661 cdw0:16 sqhd:53b9 p:0 m:0 dnr:0 00:17:14.895 [2024-04-26 16:26:51.824234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:14.895 [2024-04-26 16:26:51.824244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32661 cdw0:16 sqhd:53b9 p:0 m:0 dnr:0 00:17:14.895 [2024-04-26 16:26:51.824254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:14.895 [2024-04-26 16:26:51.824264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32661 cdw0:16 sqhd:53b9 p:0 m:0 dnr:0 00:17:14.895 [2024-04-26 16:26:51.824274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:14.895 [2024-04-26 16:26:51.824283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32661 cdw0:16 sqhd:53b9 p:0 m:0 dnr:0 00:17:14.895 [2024-04-26 16:26:51.824398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:14.895 [2024-04-26 16:26:51.824411] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:17:14.895 [2024-04-26 16:26:51.824440] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:17:14.895 [2024-04-26 16:26:51.834177] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:51.844549] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:51.854771] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:51.865033] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:51.877342] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:51.887402] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:51.897438] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:51.907611] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:51.917639] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:51.927936] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:17:14.895 [2024-04-26 16:26:51.937963] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:51.948195] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:51.958220] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:51.969567] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:51.980355] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:51.990369] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:52.000397] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:52.010627] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:52.020652] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:52.030948] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:52.040975] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:52.051548] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:52.062049] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:52.072572] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:52.082840] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:52.092864] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:52.103139] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:52.113164] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:52.123191] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:52.133218] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:52.143344] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:52.153501] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:52.163529] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:17:14.895 [2024-04-26 16:26:52.173556] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:52.183581] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:52.194269] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:52.204647] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:52.215007] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:52.225031] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:52.235142] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:52.245416] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:52.255746] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:52.265772] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:52.275990] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:52.286364] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:52.296603] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:52.306912] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:52.316938] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:52.327066] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:52.337333] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:52.347668] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:52.358435] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:52.368795] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:52.379132] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:52.389986] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:52.400023] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:17:14.895 [2024-04-26 16:26:52.410049] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:52.420077] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:52.430290] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:52.441088] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:52.451550] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:52.462589] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:52.472612] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:52.482725] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:52.492977] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:52.503003] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.895 [2024-04-26 16:26:52.513219] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.896 [2024-04-26 16:26:52.523237] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.896 [2024-04-26 16:26:52.533264] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.896 [2024-04-26 16:26:52.543289] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.896 [2024-04-26 16:26:52.553453] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.896 [2024-04-26 16:26:52.563791] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.896 [2024-04-26 16:26:52.573816] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.896 [2024-04-26 16:26:52.583842] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.896 [2024-04-26 16:26:52.593869] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.896 [2024-04-26 16:26:52.604161] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.896 [2024-04-26 16:26:52.614186] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.896 [2024-04-26 16:26:52.624212] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.896 [2024-04-26 16:26:52.634465] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:17:14.896 [2024-04-26 16:26:52.645152] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.896 [2024-04-26 16:26:52.655602] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.896 [2024-04-26 16:26:52.666222] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.896 [2024-04-26 16:26:52.676593] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.896 [2024-04-26 16:26:52.687149] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.896 [2024-04-26 16:26:52.697539] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.896 [2024-04-26 16:26:52.708048] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.896 [2024-04-26 16:26:52.718477] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.896 [2024-04-26 16:26:52.728590] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.896 [2024-04-26 16:26:52.738614] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.896 [2024-04-26 16:26:52.748639] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.896 [2024-04-26 16:26:52.758762] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.896 [2024-04-26 16:26:52.769182] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.896 [2024-04-26 16:26:52.779206] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.896 [2024-04-26 16:26:52.789548] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.896 [2024-04-26 16:26:52.799812] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.896 [2024-04-26 16:26:52.809839] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.896 [2024-04-26 16:26:52.820709] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:17:14.896 [2024-04-26 16:26:52.827050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:205664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:14.896 [2024-04-26 16:26:52.827071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.896 [2024-04-26 16:26:52.827090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:205672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:14.896 [2024-04-26 16:26:52.827102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.896 [2024-04-26 16:26:52.827121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:205680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:14.896 [2024-04-26 16:26:52.827131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.896 [2024-04-26 16:26:52.827144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:205688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:14.896 [2024-04-26 16:26:52.827155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.896 [2024-04-26 16:26:52.827166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:205696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:14.896 [2024-04-26 16:26:52.827175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.896 [2024-04-26 16:26:52.827186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:205704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:14.896 [2024-04-26 16:26:52.827195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.896 [2024-04-26 16:26:52.827206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:205712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:14.896 [2024-04-26 16:26:52.827215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.896 [2024-04-26 16:26:52.827226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:205720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:14.896 [2024-04-26 16:26:52.827235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.896 [2024-04-26 16:26:52.827246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:205728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:14.896 [2024-04-26 16:26:52.827255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.896 [2024-04-26 16:26:52.827266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:205736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:14.896 [2024-04-26 16:26:52.827276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.896 [2024-04-26 16:26:52.827287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:205744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:14.896 [2024-04-26 16:26:52.827296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.896 [2024-04-26 16:26:52.827307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:205752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:14.896 [2024-04-26 16:26:52.827316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.896 [2024-04-26 16:26:52.827327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:205760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:14.896 [2024-04-26 16:26:52.827336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.896 [2024-04-26 16:26:52.827351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:205768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:14.896 [2024-04-26 16:26:52.827361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.896 [2024-04-26 16:26:52.827373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:205776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:14.896 [2024-04-26 16:26:52.827382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.896 [2024-04-26 16:26:52.827393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:205784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:14.896 [2024-04-26 16:26:52.827402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.896 [2024-04-26 16:26:52.827413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:205792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:14.896 [2024-04-26 16:26:52.827422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.896 [2024-04-26 16:26:52.827433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:205800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:14.896 [2024-04-26 16:26:52.827443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.896 [2024-04-26 16:26:52.827454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:205808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:14.896 [2024-04-26 16:26:52.827465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.896 [2024-04-26 16:26:52.827476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:205816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:14.896 [2024-04-26 16:26:52.827485] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.896 [2024-04-26 16:26:52.827497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:204800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007700000 len:0x1000 key:0x17fa00 00:17:14.896 [2024-04-26 16:26:52.827507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.896 [2024-04-26 16:26:52.827518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:204808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007702000 len:0x1000 key:0x17fa00 00:17:14.896 [2024-04-26 16:26:52.827527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.896 [2024-04-26 16:26:52.827538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:204816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007704000 len:0x1000 key:0x17fa00 00:17:14.896 [2024-04-26 16:26:52.827548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.896 [2024-04-26 16:26:52.827559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:204824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007706000 len:0x1000 key:0x17fa00 00:17:14.896 [2024-04-26 16:26:52.827568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.896 [2024-04-26 16:26:52.827579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:204832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007708000 len:0x1000 key:0x17fa00 00:17:14.896 [2024-04-26 16:26:52.827589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.896 [2024-04-26 16:26:52.827599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:204840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000770a000 len:0x1000 key:0x17fa00 00:17:14.897 [2024-04-26 16:26:52.827610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.897 [2024-04-26 16:26:52.827622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:204848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000770c000 len:0x1000 key:0x17fa00 00:17:14.897 [2024-04-26 16:26:52.827631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.897 [2024-04-26 16:26:52.827642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:204856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000770e000 len:0x1000 key:0x17fa00 00:17:14.897 [2024-04-26 16:26:52.827651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.897 [2024-04-26 16:26:52.827662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:204864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007710000 len:0x1000 key:0x17fa00 00:17:14.897 [2024-04-26 16:26:52.827671] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.897 [2024-04-26 16:26:52.827682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:204872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007712000 len:0x1000 key:0x17fa00 00:17:14.897 [2024-04-26 16:26:52.827691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.897 [2024-04-26 16:26:52.827702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:204880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007714000 len:0x1000 key:0x17fa00 00:17:14.897 [2024-04-26 16:26:52.827711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.897 [2024-04-26 16:26:52.827723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:204888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007716000 len:0x1000 key:0x17fa00 00:17:14.897 [2024-04-26 16:26:52.827732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.897 [2024-04-26 16:26:52.827743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:204896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007718000 len:0x1000 key:0x17fa00 00:17:14.897 [2024-04-26 16:26:52.827752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.897 [2024-04-26 16:26:52.827762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:204904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771a000 len:0x1000 key:0x17fa00 00:17:14.897 [2024-04-26 16:26:52.827772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.897 [2024-04-26 16:26:52.827783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:204912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771c000 len:0x1000 key:0x17fa00 00:17:14.897 [2024-04-26 16:26:52.827792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.897 [2024-04-26 16:26:52.827803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:204920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000771e000 len:0x1000 key:0x17fa00 00:17:14.897 [2024-04-26 16:26:52.827812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.897 [2024-04-26 16:26:52.827823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:204928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007720000 len:0x1000 key:0x17fa00 00:17:14.897 [2024-04-26 16:26:52.827833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.897 [2024-04-26 16:26:52.827845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:204936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007722000 len:0x1000 key:0x17fa00 00:17:14.897 [2024-04-26 16:26:52.827854] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.897 [2024-04-26 16:26:52.827865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:204944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007724000 len:0x1000 key:0x17fa00 00:17:14.897 [2024-04-26 16:26:52.827874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.897 [2024-04-26 16:26:52.827885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:204952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007726000 len:0x1000 key:0x17fa00 00:17:14.897 [2024-04-26 16:26:52.827895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.897 [2024-04-26 16:26:52.827906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:204960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007728000 len:0x1000 key:0x17fa00 00:17:14.897 [2024-04-26 16:26:52.827915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.897 [2024-04-26 16:26:52.827926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:204968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772a000 len:0x1000 key:0x17fa00 00:17:14.897 [2024-04-26 16:26:52.827935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.897 [2024-04-26 16:26:52.827946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:204976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772c000 len:0x1000 key:0x17fa00 00:17:14.897 [2024-04-26 16:26:52.827956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.897 [2024-04-26 16:26:52.827967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:204984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772e000 len:0x1000 key:0x17fa00 00:17:14.897 [2024-04-26 16:26:52.827976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.897 [2024-04-26 16:26:52.827987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:204992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007730000 len:0x1000 key:0x17fa00 00:17:14.897 [2024-04-26 16:26:52.827996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.897 [2024-04-26 16:26:52.828006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:205000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007732000 len:0x1000 key:0x17fa00 00:17:14.897 [2024-04-26 16:26:52.828016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.897 [2024-04-26 16:26:52.828027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:205008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007734000 len:0x1000 key:0x17fa00 00:17:14.897 [2024-04-26 16:26:52.828036] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.897 [2024-04-26 16:26:52.828047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:205016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007736000 len:0x1000 key:0x17fa00 00:17:14.897 [2024-04-26 16:26:52.828056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.897 [2024-04-26 16:26:52.828068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:205024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007738000 len:0x1000 key:0x17fa00 00:17:14.897 [2024-04-26 16:26:52.828078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.897 [2024-04-26 16:26:52.828089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:205032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773a000 len:0x1000 key:0x17fa00 00:17:14.897 [2024-04-26 16:26:52.828098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.897 [2024-04-26 16:26:52.828109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:205040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773c000 len:0x1000 key:0x17fa00 00:17:14.897 [2024-04-26 16:26:52.828118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.897 [2024-04-26 16:26:52.828130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:205048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773e000 len:0x1000 key:0x17fa00 00:17:14.897 [2024-04-26 16:26:52.828139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.897 [2024-04-26 16:26:52.828150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:205056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007740000 len:0x1000 key:0x17fa00 00:17:14.897 [2024-04-26 16:26:52.828161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.897 [2024-04-26 16:26:52.828172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:205064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007742000 len:0x1000 key:0x17fa00 00:17:14.897 [2024-04-26 16:26:52.828182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.897 [2024-04-26 16:26:52.828193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:205072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007744000 len:0x1000 key:0x17fa00 00:17:14.897 [2024-04-26 16:26:52.828202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.897 [2024-04-26 16:26:52.828214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:205080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007746000 len:0x1000 key:0x17fa00 00:17:14.897 [2024-04-26 16:26:52.828223] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.897 [2024-04-26 16:26:52.828234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:205088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007748000 len:0x1000 key:0x17fa00 00:17:14.897 [2024-04-26 16:26:52.828243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.897 [2024-04-26 16:26:52.828255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:205096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774a000 len:0x1000 key:0x17fa00 00:17:14.897 [2024-04-26 16:26:52.828264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.897 [2024-04-26 16:26:52.828275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:205104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774c000 len:0x1000 key:0x17fa00 00:17:14.897 [2024-04-26 16:26:52.828285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.897 [2024-04-26 16:26:52.828298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:205112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774e000 len:0x1000 key:0x17fa00 00:17:14.897 [2024-04-26 16:26:52.828307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.897 [2024-04-26 16:26:52.828318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:205120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007750000 len:0x1000 key:0x17fa00 00:17:14.897 [2024-04-26 16:26:52.828327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.898 [2024-04-26 16:26:52.828338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:205128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007752000 len:0x1000 key:0x17fa00 00:17:14.898 [2024-04-26 16:26:52.828354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.898 [2024-04-26 16:26:52.828366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:205136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007754000 len:0x1000 key:0x17fa00 00:17:14.898 [2024-04-26 16:26:52.828375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.898 [2024-04-26 16:26:52.828385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:205144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007756000 len:0x1000 key:0x17fa00 00:17:14.898 [2024-04-26 16:26:52.828395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.898 [2024-04-26 16:26:52.828406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:205152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007758000 len:0x1000 key:0x17fa00 00:17:14.898 [2024-04-26 16:26:52.828415] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.898 [2024-04-26 16:26:52.828426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:205160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775a000 len:0x1000 key:0x17fa00 00:17:14.898 [2024-04-26 16:26:52.828436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.898 [2024-04-26 16:26:52.828446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:205168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775c000 len:0x1000 key:0x17fa00 00:17:14.898 [2024-04-26 16:26:52.828456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.898 [2024-04-26 16:26:52.828466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:205176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775e000 len:0x1000 key:0x17fa00 00:17:14.898 [2024-04-26 16:26:52.828476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.898 [2024-04-26 16:26:52.828487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:205184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007760000 len:0x1000 key:0x17fa00 00:17:14.898 [2024-04-26 16:26:52.828497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.898 [2024-04-26 16:26:52.828507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:205192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007762000 len:0x1000 key:0x17fa00 00:17:14.898 [2024-04-26 16:26:52.828517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.898 [2024-04-26 16:26:52.828528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:205200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007764000 len:0x1000 key:0x17fa00 00:17:14.898 [2024-04-26 16:26:52.828539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.898 [2024-04-26 16:26:52.828550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:205208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007766000 len:0x1000 key:0x17fa00 00:17:14.898 [2024-04-26 16:26:52.828559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.898 [2024-04-26 16:26:52.828570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:205216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007768000 len:0x1000 key:0x17fa00 00:17:14.898 [2024-04-26 16:26:52.828579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.898 [2024-04-26 16:26:52.828591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:205224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776a000 len:0x1000 key:0x17fa00 00:17:14.898 [2024-04-26 16:26:52.828600] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.898 [2024-04-26 16:26:52.828611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:205232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776c000 len:0x1000 key:0x17fa00 00:17:14.898 [2024-04-26 16:26:52.828621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.898 [2024-04-26 16:26:52.828631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:205240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776e000 len:0x1000 key:0x17fa00 00:17:14.898 [2024-04-26 16:26:52.828641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.898 [2024-04-26 16:26:52.828652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:205248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007770000 len:0x1000 key:0x17fa00 00:17:14.898 [2024-04-26 16:26:52.828661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.898 [2024-04-26 16:26:52.828671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:205256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007772000 len:0x1000 key:0x17fa00 00:17:14.898 [2024-04-26 16:26:52.828681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.898 [2024-04-26 16:26:52.828692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:205264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007774000 len:0x1000 key:0x17fa00 00:17:14.898 [2024-04-26 16:26:52.828701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.898 [2024-04-26 16:26:52.828712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:205272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007776000 len:0x1000 key:0x17fa00 00:17:14.898 [2024-04-26 16:26:52.828721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.898 [2024-04-26 16:26:52.828732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:205280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007778000 len:0x1000 key:0x17fa00 00:17:14.898 [2024-04-26 16:26:52.828744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.898 [2024-04-26 16:26:52.828756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:205288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777a000 len:0x1000 key:0x17fa00 00:17:14.898 [2024-04-26 16:26:52.828767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.898 [2024-04-26 16:26:52.828779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:205296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777c000 len:0x1000 key:0x17fa00 00:17:14.898 [2024-04-26 16:26:52.828789] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.898 [2024-04-26 16:26:52.828802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:205304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777e000 len:0x1000 key:0x17fa00 00:17:14.898 [2024-04-26 16:26:52.828812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.898 [2024-04-26 16:26:52.828824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:205312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007780000 len:0x1000 key:0x17fa00 00:17:14.898 [2024-04-26 16:26:52.828835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.898 [2024-04-26 16:26:52.828847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:205320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007782000 len:0x1000 key:0x17fa00 00:17:14.898 [2024-04-26 16:26:52.828856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.898 [2024-04-26 16:26:52.828867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:205328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007784000 len:0x1000 key:0x17fa00 00:17:14.898 [2024-04-26 16:26:52.828877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.898 [2024-04-26 16:26:52.828888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:205336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007786000 len:0x1000 key:0x17fa00 00:17:14.898 [2024-04-26 16:26:52.828897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.898 [2024-04-26 16:26:52.828908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:205344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007788000 len:0x1000 key:0x17fa00 00:17:14.898 [2024-04-26 16:26:52.828917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.898 [2024-04-26 16:26:52.828929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:205352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000778a000 len:0x1000 key:0x17fa00 00:17:14.898 [2024-04-26 16:26:52.828938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.898 [2024-04-26 16:26:52.828949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:205360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000778c000 len:0x1000 key:0x17fa00 00:17:14.898 [2024-04-26 16:26:52.828958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.898 [2024-04-26 16:26:52.828969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:205368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000778e000 len:0x1000 key:0x17fa00 00:17:14.898 [2024-04-26 16:26:52.828979] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.898 [2024-04-26 16:26:52.828990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:205376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007790000 len:0x1000 key:0x17fa00 00:17:14.898 [2024-04-26 16:26:52.829000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.898 [2024-04-26 16:26:52.829012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:205384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007792000 len:0x1000 key:0x17fa00 00:17:14.898 [2024-04-26 16:26:52.829021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.898 [2024-04-26 16:26:52.829032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:205392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007794000 len:0x1000 key:0x17fa00 00:17:14.898 [2024-04-26 16:26:52.829041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.898 [2024-04-26 16:26:52.829053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:205400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007796000 len:0x1000 key:0x17fa00 00:17:14.898 [2024-04-26 16:26:52.829063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.899 [2024-04-26 16:26:52.829074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:205408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007798000 len:0x1000 key:0x17fa00 00:17:14.899 [2024-04-26 16:26:52.829084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.899 [2024-04-26 16:26:52.829095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:205416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000779a000 len:0x1000 key:0x17fa00 00:17:14.899 [2024-04-26 16:26:52.829106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.899 [2024-04-26 16:26:52.829118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:205424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000779c000 len:0x1000 key:0x17fa00 00:17:14.899 [2024-04-26 16:26:52.829127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.899 [2024-04-26 16:26:52.829138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:205432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000779e000 len:0x1000 key:0x17fa00 00:17:14.899 [2024-04-26 16:26:52.829147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.899 [2024-04-26 16:26:52.829158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:205440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a0000 len:0x1000 key:0x17fa00 00:17:14.899 [2024-04-26 16:26:52.829167] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.899 [2024-04-26 16:26:52.829178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:205448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a2000 len:0x1000 key:0x17fa00 00:17:14.899 [2024-04-26 16:26:52.829187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.899 [2024-04-26 16:26:52.829198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:205456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a4000 len:0x1000 key:0x17fa00 00:17:14.899 [2024-04-26 16:26:52.829208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.899 [2024-04-26 16:26:52.829219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:205464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a6000 len:0x1000 key:0x17fa00 00:17:14.899 [2024-04-26 16:26:52.829228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.899 [2024-04-26 16:26:52.829240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:205472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a8000 len:0x1000 key:0x17fa00 00:17:14.899 [2024-04-26 16:26:52.829250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.899 [2024-04-26 16:26:52.829261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:205480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077aa000 len:0x1000 key:0x17fa00 00:17:14.899 [2024-04-26 16:26:52.829270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.899 [2024-04-26 16:26:52.829281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:205488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ac000 len:0x1000 key:0x17fa00 00:17:14.899 [2024-04-26 16:26:52.829290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.899 [2024-04-26 16:26:52.829301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:205496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ae000 len:0x1000 key:0x17fa00 00:17:14.899 [2024-04-26 16:26:52.829311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.899 [2024-04-26 16:26:52.829321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:205504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b0000 len:0x1000 key:0x17fa00 00:17:14.899 [2024-04-26 16:26:52.829331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.899 [2024-04-26 16:26:52.829341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:205512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b2000 len:0x1000 key:0x17fa00 00:17:14.899 [2024-04-26 16:26:52.829355] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.899 [2024-04-26 16:26:52.829366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:205520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b4000 len:0x1000 key:0x17fa00 00:17:14.899 [2024-04-26 16:26:52.829376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.899 [2024-04-26 16:26:52.829386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:205528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b6000 len:0x1000 key:0x17fa00 00:17:14.899 [2024-04-26 16:26:52.829395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.899 [2024-04-26 16:26:52.829406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:205536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b8000 len:0x1000 key:0x17fa00 00:17:14.899 [2024-04-26 16:26:52.829415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.899 [2024-04-26 16:26:52.829429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:205544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ba000 len:0x1000 key:0x17fa00 00:17:14.899 [2024-04-26 16:26:52.829438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.899 [2024-04-26 16:26:52.829450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:205552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077bc000 len:0x1000 key:0x17fa00 00:17:14.899 [2024-04-26 16:26:52.829459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.899 [2024-04-26 16:26:52.829471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:205560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077be000 len:0x1000 key:0x17fa00 00:17:14.899 [2024-04-26 16:26:52.829481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.899 [2024-04-26 16:26:52.829492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:205568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c0000 len:0x1000 key:0x17fa00 00:17:14.899 [2024-04-26 16:26:52.829503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.899 [2024-04-26 16:26:52.829514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:205576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c2000 len:0x1000 key:0x17fa00 00:17:14.899 [2024-04-26 16:26:52.829523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.899 [2024-04-26 16:26:52.829534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:205584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c4000 len:0x1000 key:0x17fa00 00:17:14.899 [2024-04-26 16:26:52.829543] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.899 [2024-04-26 16:26:52.829554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:205592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c6000 len:0x1000 key:0x17fa00 00:17:14.899 [2024-04-26 16:26:52.829564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.899 [2024-04-26 16:26:52.829575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:205600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c8000 len:0x1000 key:0x17fa00 00:17:14.899 [2024-04-26 16:26:52.829584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.899 [2024-04-26 16:26:52.829594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:205608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ca000 len:0x1000 key:0x17fa00 00:17:14.899 [2024-04-26 16:26:52.829604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.899 [2024-04-26 16:26:52.829615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:205616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077cc000 len:0x1000 key:0x17fa00 00:17:14.899 [2024-04-26 16:26:52.829624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.899 [2024-04-26 16:26:52.829635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:205624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ce000 len:0x1000 key:0x17fa00 00:17:14.899 [2024-04-26 16:26:52.829644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.900 [2024-04-26 16:26:52.829655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:205632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077d0000 len:0x1000 key:0x17fa00 00:17:14.900 [2024-04-26 16:26:52.829664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.900 [2024-04-26 16:26:52.829675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:205640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077d2000 len:0x1000 key:0x17fa00 00:17:14.900 [2024-04-26 16:26:52.829684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.900 [2024-04-26 16:26:52.838906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:205648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077d4000 len:0x1000 key:0x17fa00 00:17:14.900 [2024-04-26 16:26:52.838920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.900 [2024-04-26 16:26:52.851953] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:17:14.900 [2024-04-26 16:26:52.851968] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:17:14.900 [2024-04-26 
16:26:52.851977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:205656 len:8 PRP1 0x0 PRP2 0x0
00:17:14.900 [2024-04-26 16:26:52.851987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:14.900 [2024-04-26 16:26:52.853889] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
00:17:14.900 [2024-04-26 16:26:52.854171] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19)
00:17:14.900 [2024-04-26 16:26:52.854187] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:17:14.900 [2024-04-26 16:26:52.854195] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080
00:17:14.900 [2024-04-26 16:26:52.854213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:17:14.900 [2024-04-26 16:26:52.854223] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state.
00:17:14.900 [2024-04-26 16:26:52.854236] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state
00:17:14.900 [2024-04-26 16:26:52.854246] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed
00:17:14.900 [2024-04-26 16:26:52.854257] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state
00:17:14.900 [2024-04-26 16:26:52.854278] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:17:14.900 [2024-04-26 16:26:52.854287] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
00:17:14.900 [2024-04-26 16:26:54.859420] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:17:14.900 [2024-04-26 16:26:54.859456] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080
00:17:14.900 [2024-04-26 16:26:54.859483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:17:14.900 [2024-04-26 16:26:54.859495] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state.
00:17:14.900 [2024-04-26 16:26:54.859509] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state
00:17:14.900 [2024-04-26 16:26:54.859518] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed
00:17:14.900 [2024-04-26 16:26:54.859529] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state
00:17:14.900 [2024-04-26 16:26:54.859552] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:17:14.900 [2024-04-26 16:26:54.859562] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
00:17:14.900 [2024-04-26 16:26:56.865315] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:17:14.900 [2024-04-26 16:26:56.865351] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080
00:17:14.900 [2024-04-26 16:26:56.865386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:17:14.900 [2024-04-26 16:26:56.865401] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state.
00:17:14.900 [2024-04-26 16:26:56.865422] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state
00:17:14.900 [2024-04-26 16:26:56.865433] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed
00:17:14.900 [2024-04-26 16:26:56.865446] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state
00:17:14.900 [2024-04-26 16:26:56.865480] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:17:14.900 [2024-04-26 16:26:56.865491] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller
00:17:14.900 [2024-04-26 16:26:57.034759] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
00:17:14.900 [2024-04-26 16:26:57.034788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:17:14.900 [2024-04-26 16:26:57.034801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32661 cdw0:16 sqhd:53b9 p:0 m:0 dnr:0
00:17:14.900 [2024-04-26 16:26:57.034814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:17:14.900 [2024-04-26 16:26:57.034823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32661 cdw0:16 sqhd:53b9 p:0 m:0 dnr:0
00:17:14.900 [2024-04-26 16:26:57.034833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:17:14.900 [2024-04-26 16:26:57.034843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32661 cdw0:16 sqhd:53b9 p:0 m:0 dnr:0
00:17:14.900 [2024-04-26 16:26:57.034852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:17:14.900 [2024-04-26 16:26:57.034862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32661 cdw0:16 sqhd:53b9 p:0 m:0 dnr:0
00:17:14.900 [2024-04-26 16:26:57.040764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:17:14.900 [2024-04-26 16:26:57.040784] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state.
00:17:14.900 [2024-04-26 16:26:57.040810] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:17:14.900 [2024-04-26 16:26:57.044767] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.900 [2024-04-26 16:26:57.054792] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.900 [2024-04-26 16:26:57.064817] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.900 [2024-04-26 16:26:57.074843] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.900 [2024-04-26 16:26:57.084867] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.900 [2024-04-26 16:26:57.094893] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.900 [2024-04-26 16:26:57.104917] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.900 [2024-04-26 16:26:57.114943] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.900 [2024-04-26 16:26:57.124969] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.900 [2024-04-26 16:26:57.134996] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.900 [2024-04-26 16:26:57.145020] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.900 [2024-04-26 16:26:57.155046] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.900 [2024-04-26 16:26:57.165070] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.900 [2024-04-26 16:26:57.175095] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.900 [2024-04-26 16:26:57.185120] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.900 [2024-04-26 16:26:57.195144] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.900 [2024-04-26 16:26:57.205170] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.900 [2024-04-26 16:26:57.215194] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.900 [2024-04-26 16:26:57.225220] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.900 [2024-04-26 16:26:57.235247] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.900 [2024-04-26 16:26:57.245273] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.900 [2024-04-26 16:26:57.255299] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:17:14.900 [2024-04-26 16:26:57.265323] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.900 [2024-04-26 16:26:57.275354] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.900 [2024-04-26 16:26:57.285378] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.900 [2024-04-26 16:26:57.295404] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.900 [2024-04-26 16:26:57.305430] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.900 [2024-04-26 16:26:57.315455] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.900 [2024-04-26 16:26:57.325479] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.900 [2024-04-26 16:26:57.335503] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.900 [2024-04-26 16:26:57.345527] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.900 [2024-04-26 16:26:57.355553] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.900 [2024-04-26 16:26:57.365578] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.900 [2024-04-26 16:26:57.375604] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.900 [2024-04-26 16:26:57.385630] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.900 [2024-04-26 16:26:57.395655] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.900 [2024-04-26 16:26:57.405680] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.415705] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.425731] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.435756] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.445782] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.455808] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.465833] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.475859] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.485886] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:17:14.901 [2024-04-26 16:26:57.495913] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.505938] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.515965] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.525991] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.536016] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.546040] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.556065] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.566090] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.576115] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.586140] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.596164] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.606190] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.616215] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.626242] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.636269] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.646294] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.656319] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.666343] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.676368] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.686393] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.696417] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.706443] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.716470] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:17:14.901 [2024-04-26 16:26:57.726496] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.736519] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.746545] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.756571] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.766595] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.776621] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.786647] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.796672] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.806697] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.816723] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.826749] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.836775] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.846799] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.856824] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.866986] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.877016] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.894006] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.902396] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:14.901 [2024-04-26 16:26:57.904002] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.914028] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.924053] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.934078] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.944104] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:17:14.901 [2024-04-26 16:26:57.954131] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.964157] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.974182] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.984208] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:57.994232] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:58.004259] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:58.014284] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:58.024311] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:58.034338] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:14.901 [2024-04-26 16:26:58.043219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:106480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007902000 len:0x1000 key:0x1ffa00 00:17:14.901 [2024-04-26 16:26:58.043233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.901 [2024-04-26 16:26:58.043250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:106488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007900000 len:0x1000 key:0x1ffa00 00:17:14.901 [2024-04-26 16:26:58.043260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.901 [2024-04-26 16:26:58.043271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:106496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:14.901 [2024-04-26 16:26:58.043280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.901 [2024-04-26 16:26:58.043291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:106504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:14.901 [2024-04-26 16:26:58.043301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.901 [2024-04-26 16:26:58.043311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:106512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:14.901 [2024-04-26 16:26:58.043321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.901 [2024-04-26 16:26:58.043331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:106520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:14.901 [2024-04-26 16:26:58.043341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32661 
cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.901
[2024-04-26 16:26:58.043355 - 16:26:58.045780] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: every remaining queued WRITE on sqid:1 (nsid:1, lba:106528 through lba:107488, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) is printed and completed as ABORTED - SQ DELETION (00/08) qid:1 cid:32661 cdw0:50b70770 sqhd:2530 p:0 m:0 dnr:0 00:17:14.901-00:17:14.905
00:17:14.905 [2024-04-26 16:26:58.058698] rdma_verbs.c: 83:spdk_rdma_qp_destroy: *WARNING*: Destroying qpair with queued Work Requests
00:17:14.905 [2024-04-26 16:26:58.058759] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:17:14.905 [2024-04-26 16:26:58.058768] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:17:14.905 [2024-04-26 16:26:58.058777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107496 len:8 PRP1 0x0 PRP2 0x0
00:17:14.905 [2024-04-26 16:26:58.058787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:14.905 [2024-04-26 16:26:58.058836] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller
00:17:14.905 [2024-04-26 16:26:58.059060] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19)
00:17:14.905 [2024-04-26 16:26:58.059073] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:17:14.905 [2024-04-26 16:26:58.059081] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4300
00:17:14.905 [2024-04-26 16:26:58.059098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:17:14.905 [2024-04-26 16:26:58.059108] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state.
00:17:14.905 [2024-04-26 16:26:58.059120] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state
00:17:14.905 [2024-04-26 16:26:58.059129] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed
00:17:14.905 [2024-04-26 16:26:58.059139] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state
00:17:14.905 [2024-04-26 16:26:58.059156] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:17:14.905 [2024-04-26 16:26:58.059165] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:17:14.905 [2024-04-26 16:27:00.067057] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:17:14.905 [2024-04-26 16:27:00.067108] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4300 00:17:14.905 [2024-04-26 16:27:00.067140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:14.905 [2024-04-26 16:27:00.067152] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:17:14.905 [2024-04-26 16:27:00.067178] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:17:14.905 [2024-04-26 16:27:00.067189] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:17:14.905 [2024-04-26 16:27:00.067200] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:17:14.905 [2024-04-26 16:27:00.067238] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:14.905 [2024-04-26 16:27:00.067249] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:17:14.905 [2024-04-26 16:27:02.072900] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:17:14.905 [2024-04-26 16:27:02.072944] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4300 00:17:14.905 [2024-04-26 16:27:02.072974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:14.905 [2024-04-26 16:27:02.072985] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:17:14.905 [2024-04-26 16:27:02.073011] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state 00:17:14.905 [2024-04-26 16:27:02.073021] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed 00:17:14.905 [2024-04-26 16:27:02.073033] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state 00:17:14.905 [2024-04-26 16:27:02.073073] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:14.905 [2024-04-26 16:27:02.073084] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller 00:17:14.905 [2024-04-26 16:27:03.118024] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:14.905 00:17:14.905 Latency(us) 00:17:14.905 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:14.905 Job: Nvme_mlx_0_0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:14.905 Verification LBA range: start 0x0 length 0x8000 00:17:14.905 Nvme_mlx_0_0n1 : 90.01 11169.46 43.63 0.00 0.00 11440.10 933.18 7061019.60 00:17:14.905 Job: Nvme_mlx_0_1n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:14.905 Verification LBA range: start 0x0 length 0x8000 00:17:14.905 Nvme_mlx_0_1n1 : 90.01 10668.08 41.67 0.00 0.00 11979.79 2336.50 7061019.60 00:17:14.905 =================================================================================================================== 00:17:14.905 Total : 21837.54 85.30 0.00 0.00 11703.76 933.18 7061019.60 00:17:14.905 Received shutdown signal, test time was about 90.000000 seconds 00:17:14.905 00:17:14.905 Latency(us) 00:17:14.905 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:14.905 =================================================================================================================== 00:17:14.905 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:14.905 16:28:17 -- target/device_removal.sh@123 -- # trap - SIGINT SIGTERM EXIT 00:17:14.905 16:28:17 -- target/device_removal.sh@124 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt 00:17:14.905 16:28:17 -- target/device_removal.sh@202 -- # killprocess 472742 00:17:14.905 16:28:17 -- common/autotest_common.sh@936 -- # '[' -z 472742 ']' 00:17:14.905 16:28:17 -- common/autotest_common.sh@940 -- # kill -0 472742 00:17:14.905 16:28:17 -- common/autotest_common.sh@941 -- # uname 00:17:14.905 16:28:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:14.905 16:28:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 472742 00:17:14.905 16:28:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:14.905 16:28:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:14.905 16:28:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 472742' 00:17:14.905 killing process with pid 472742 00:17:14.905 16:28:17 -- common/autotest_common.sh@955 -- # kill 472742 00:17:14.905 16:28:17 -- common/autotest_common.sh@960 -- # wait 472742 00:17:14.905 [2024-04-26 16:28:17.387125] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:17:14.905 16:28:17 -- target/device_removal.sh@203 -- # nvmfpid= 00:17:14.905 16:28:17 -- target/device_removal.sh@205 -- # return 0 00:17:14.905 00:17:14.905 real 1m33.254s 00:17:14.905 user 4m32.934s 00:17:14.905 sys 0m3.650s 00:17:14.905 16:28:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:14.905 16:28:17 -- common/autotest_common.sh@10 -- # set +x 00:17:14.905 ************************************ 00:17:14.905 END TEST nvmf_device_removal_pci_remove_no_srq 00:17:14.905 ************************************ 00:17:14.905 16:28:17 -- target/device_removal.sh@312 -- # run_test nvmf_device_removal_pci_remove test_remove_and_rescan 00:17:14.905 16:28:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:14.905 16:28:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:14.905 16:28:17 -- common/autotest_common.sh@10 -- # set +x 00:17:14.905 ************************************ 00:17:14.905 START TEST nvmf_device_removal_pci_remove 00:17:14.905 ************************************ 00:17:14.905 16:28:17 -- 
common/autotest_common.sh@1111 -- # test_remove_and_rescan 00:17:14.905 16:28:17 -- target/device_removal.sh@128 -- # nvmfappstart -m 0x3 00:17:14.905 16:28:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:14.905 16:28:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:14.905 16:28:17 -- common/autotest_common.sh@10 -- # set +x 00:17:14.905 16:28:17 -- nvmf/common.sh@470 -- # nvmfpid=485848 00:17:14.905 16:28:17 -- nvmf/common.sh@471 -- # waitforlisten 485848 00:17:14.905 16:28:17 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:14.905 16:28:17 -- common/autotest_common.sh@817 -- # '[' -z 485848 ']' 00:17:14.905 16:28:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.905 16:28:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:14.905 16:28:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:14.905 16:28:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:14.905 16:28:17 -- common/autotest_common.sh@10 -- # set +x 00:17:14.905 [2024-04-26 16:28:17.976965] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:17:14.905 [2024-04-26 16:28:17.977019] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:14.905 EAL: No free 2048 kB hugepages reported on node 1 00:17:14.905 [2024-04-26 16:28:18.047759] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:14.905 [2024-04-26 16:28:18.127604] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:14.905 [2024-04-26 16:28:18.127646] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:14.905 [2024-04-26 16:28:18.127655] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:14.905 [2024-04-26 16:28:18.127680] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:14.906 [2024-04-26 16:28:18.127688] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
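nvmfappstart launches the target with -i 0 -e 0xFFFF -m 0x3 and waitforlisten then blocks until PID 485848 answers on its RPC socket. A minimal stand-alone sketch of that startup handshake, with paths taken from this log and the spdk_get_version probe assumed (the harness's waitforlisten may probe differently):

  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
  nvmfpid=$!
  # Block until the target's JSON-RPC server answers on the default socket.
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      sleep 0.5
  done
  echo "nvmf_tgt ($nvmfpid) is listening on /var/tmp/spdk.sock"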
00:17:14.906 [2024-04-26 16:28:18.127891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:14.906 [2024-04-26 16:28:18.127895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.906 16:28:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:14.906 16:28:18 -- common/autotest_common.sh@850 -- # return 0 00:17:14.906 16:28:18 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:14.906 16:28:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:14.906 16:28:18 -- common/autotest_common.sh@10 -- # set +x 00:17:14.906 16:28:18 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:14.906 16:28:18 -- target/device_removal.sh@130 -- # create_subsystem_and_connect 00:17:14.906 16:28:18 -- target/device_removal.sh@45 -- # local -gA netdev_nvme_dict 00:17:14.906 16:28:18 -- target/device_removal.sh@46 -- # netdev_nvme_dict=() 00:17:14.906 16:28:18 -- target/device_removal.sh@48 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:14.906 16:28:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:14.906 16:28:18 -- common/autotest_common.sh@10 -- # set +x 00:17:14.906 [2024-04-26 16:28:18.843434] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xf15c90/0xf1a180) succeed. 00:17:14.906 [2024-04-26 16:28:18.852324] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xf17190/0xf5b810) succeed. 00:17:14.906 16:28:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:14.906 16:28:18 -- target/device_removal.sh@49 -- # get_rdma_if_list 00:17:14.906 16:28:18 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:14.906 16:28:18 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:14.906 16:28:18 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:14.906 16:28:18 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:14.906 16:28:18 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:14.906 16:28:18 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:14.906 16:28:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:14.906 16:28:18 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:14.906 16:28:18 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:17:14.906 16:28:18 -- nvmf/common.sh@105 -- # continue 2 00:17:14.906 16:28:18 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:14.906 16:28:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:14.906 16:28:18 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:14.906 16:28:18 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:14.906 16:28:18 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:14.906 16:28:18 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:17:14.906 16:28:18 -- nvmf/common.sh@105 -- # continue 2 00:17:14.906 16:28:18 -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:17:14.906 16:28:18 -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_0 00:17:14.906 16:28:18 -- target/device_removal.sh@25 -- # local -a dev_name 00:17:14.906 16:28:18 -- target/device_removal.sh@27 -- # dev_name=mlx_0_0 00:17:14.906 16:28:18 -- target/device_removal.sh@28 -- # malloc_name=mlx_0_0 00:17:14.906 16:28:18 -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_0 00:17:14.906 16:28:18 -- target/device_removal.sh@21 -- # echo 
nqn.2016-06.io.spdk:system_mlx_0_0 00:17:14.906 16:28:18 -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:17:14.906 16:28:18 -- target/device_removal.sh@30 -- # get_ip_address mlx_0_0 00:17:14.906 16:28:18 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:14.906 16:28:18 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:14.906 16:28:18 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:14.906 16:28:18 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:14.906 16:28:18 -- target/device_removal.sh@30 -- # ip=192.168.100.8 00:17:14.906 16:28:18 -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_0 00:17:14.906 16:28:18 -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:17:14.906 16:28:18 -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:17:14.906 16:28:18 -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_0 00:17:14.906 16:28:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:14.906 16:28:18 -- common/autotest_common.sh@10 -- # set +x 00:17:14.906 16:28:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:14.906 16:28:19 -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_0 -a -s SPDK000mlx_0_0 00:17:14.906 16:28:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:14.906 16:28:19 -- common/autotest_common.sh@10 -- # set +x 00:17:14.906 16:28:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:14.906 16:28:19 -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_0 mlx_0_0 00:17:14.906 16:28:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:14.906 16:28:19 -- common/autotest_common.sh@10 -- # set +x 00:17:14.906 16:28:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:14.906 16:28:19 -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 00:17:14.906 16:28:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:14.906 16:28:19 -- common/autotest_common.sh@10 -- # set +x 00:17:14.906 [2024-04-26 16:28:19.045918] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:14.906 16:28:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:14.906 16:28:19 -- target/device_removal.sh@41 -- # return 0 00:17:14.906 16:28:19 -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_0 00:17:14.906 16:28:19 -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:17:14.906 16:28:19 -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev mlx_0_1 00:17:14.906 16:28:19 -- target/device_removal.sh@25 -- # local -a dev_name 00:17:14.906 16:28:19 -- target/device_removal.sh@27 -- # dev_name=mlx_0_1 00:17:14.906 16:28:19 -- target/device_removal.sh@28 -- # malloc_name=mlx_0_1 00:17:14.906 16:28:19 -- target/device_removal.sh@29 -- # get_subsystem_nqn mlx_0_1 00:17:14.906 16:28:19 -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:17:14.906 16:28:19 -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:17:14.906 16:28:19 -- target/device_removal.sh@30 -- # get_ip_address mlx_0_1 00:17:14.906 16:28:19 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:14.906 16:28:19 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:14.906 16:28:19 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:14.906 16:28:19 -- nvmf/common.sh@113 -- # cut -d/ -f1 
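The rpc_cmd calls traced above for mlx_0_0, and repeated next for mlx_0_1 on 192.168.100.9, reduce to the following target-side sequence. This is a recap of commands already shown in the trace, with scripts/rpc.py standing in for the test's rpc_cmd wrapper:

  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"
  "$RPC" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  "$RPC" bdev_malloc_create 128 512 -b mlx_0_0          # 128 MiB malloc bdev, 512 B blocks
  "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_0 -a -s SPDK000mlx_0_0
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_0 mlx_0_0
  "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420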
00:17:14.906 16:28:19 -- target/device_removal.sh@30 -- # ip=192.168.100.9 00:17:14.906 16:28:19 -- target/device_removal.sh@31 -- # serial=SPDK000mlx_0_1 00:17:14.906 16:28:19 -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:17:14.906 16:28:19 -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:17:14.906 16:28:19 -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b mlx_0_1 00:17:14.906 16:28:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:14.906 16:28:19 -- common/autotest_common.sh@10 -- # set +x 00:17:14.906 16:28:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:14.906 16:28:19 -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_mlx_0_1 -a -s SPDK000mlx_0_1 00:17:14.906 16:28:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:14.906 16:28:19 -- common/autotest_common.sh@10 -- # set +x 00:17:14.906 16:28:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:14.906 16:28:19 -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_mlx_0_1 mlx_0_1 00:17:14.906 16:28:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:14.906 16:28:19 -- common/autotest_common.sh@10 -- # set +x 00:17:14.906 16:28:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:14.906 16:28:19 -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 00:17:14.906 16:28:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:14.906 16:28:19 -- common/autotest_common.sh@10 -- # set +x 00:17:14.906 [2024-04-26 16:28:19.130004] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:17:14.906 16:28:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:14.906 16:28:19 -- target/device_removal.sh@41 -- # return 0 00:17:14.906 16:28:19 -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=mlx_0_1 00:17:14.906 16:28:19 -- target/device_removal.sh@53 -- # return 0 00:17:14.906 16:28:19 -- target/device_removal.sh@132 -- # generate_io_traffic_with_bdevperf mlx_0_0 mlx_0_1 00:17:14.906 16:28:19 -- target/device_removal.sh@87 -- # dev_names=('mlx_0_0' 'mlx_0_1') 00:17:14.906 16:28:19 -- target/device_removal.sh@87 -- # local dev_names 00:17:14.906 16:28:19 -- target/device_removal.sh@89 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:14.906 16:28:19 -- target/device_removal.sh@91 -- # bdevperf_pid=485998 00:17:14.906 16:28:19 -- target/device_removal.sh@93 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; kill -9 $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:14.906 16:28:19 -- target/device_removal.sh@90 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:17:14.906 16:28:19 -- target/device_removal.sh@94 -- # waitforlisten 485998 /var/tmp/bdevperf.sock 00:17:14.906 16:28:19 -- common/autotest_common.sh@817 -- # '[' -z 485998 ']' 00:17:14.906 16:28:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:14.906 16:28:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:14.906 16:28:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
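On the initiator side, the bdevperf sequence traced around this point boils down to: start bdevperf idle (-z) on its own RPC socket, set the NVMe retry policy, attach one controller per target port, and let bdevperf.py drive the 90-second verify workload. A recap of those commands as they appear in this log (paths are the in-tree SPDK scripts):

  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  SOCK=/var/tmp/bdevperf.sock
  "$SPDK/build/examples/bdevperf" -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w verify -t 90 &
  # The test waits for the socket (waitforlisten) before issuing RPCs.
  until "$SPDK/scripts/rpc.py" -s "$SOCK" spdk_get_version >/dev/null 2>&1; do sleep 0.5; done
  "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_set_options -r -1
  "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b Nvme_mlx_0_0 -t rdma \
      -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_0 -l -1 -o 1
  "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b Nvme_mlx_0_1 -t rdma \
      -a 192.168.100.9 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_1 -l -1 -o 1
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -t 120 -s "$SOCK" perform_tests &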
00:17:14.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:14.906 16:28:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:14.906 16:28:19 -- common/autotest_common.sh@10 -- # set +x 00:17:14.906 16:28:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:14.906 16:28:20 -- common/autotest_common.sh@850 -- # return 0 00:17:14.906 16:28:20 -- target/device_removal.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:14.906 16:28:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:14.906 16:28:20 -- common/autotest_common.sh@10 -- # set +x 00:17:14.906 16:28:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:14.906 16:28:20 -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:17:14.906 16:28:20 -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_0 00:17:14.906 16:28:20 -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_0 00:17:14.906 16:28:20 -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_0 00:17:14.906 16:28:20 -- target/device_removal.sh@102 -- # get_ip_address mlx_0_0 00:17:14.906 16:28:20 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:14.906 16:28:20 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:14.906 16:28:20 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:14.906 16:28:20 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:14.906 16:28:20 -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.8 00:17:14.907 16:28:20 -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_0 -l -1 -o 1 00:17:14.907 16:28:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:14.907 16:28:20 -- common/autotest_common.sh@10 -- # set +x 00:17:14.907 Nvme_mlx_0_0n1 00:17:14.907 16:28:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:14.907 16:28:20 -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:17:14.907 16:28:20 -- target/device_removal.sh@101 -- # get_subsystem_nqn mlx_0_1 00:17:14.907 16:28:20 -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_mlx_0_1 00:17:14.907 16:28:20 -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_mlx_0_1 00:17:14.907 16:28:20 -- target/device_removal.sh@102 -- # get_ip_address mlx_0_1 00:17:14.907 16:28:20 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:14.907 16:28:20 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:14.907 16:28:20 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:14.907 16:28:20 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:14.907 16:28:20 -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.9 00:17:14.907 16:28:20 -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_1 -t rdma -a 192.168.100.9 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_1 -l -1 -o 1 00:17:14.907 16:28:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:14.907 16:28:20 -- common/autotest_common.sh@10 -- # set +x 00:17:14.907 Nvme_mlx_0_1n1 00:17:14.907 16:28:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:14.907 16:28:20 -- target/device_removal.sh@110 -- # bdevperf_rpc_pid=486094 00:17:14.907 16:28:20 -- target/device_removal.sh@112 -- # sleep 5 00:17:14.907 16:28:20 -- target/device_removal.sh@109 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:17:16.285 16:28:25 -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:17:16.285 16:28:25 -- target/device_removal.sh@135 -- # nvme_dev=mlx_0_0 00:17:16.285 16:28:25 -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_0 00:17:16.285 16:28:25 -- target/device_removal.sh@71 -- # dev_name=mlx_0_0 00:17:16.285 16:28:25 -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_0 00:17:16.285 16:28:25 -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:17:16.285 16:28:25 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.0/net/mlx_0_0/device 00:17:16.285 16:28:25 -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/infiniband 00:17:16.285 16:28:25 -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_0 00:17:16.285 16:28:25 -- target/device_removal.sh@137 -- # get_ip_address mlx_0_0 00:17:16.285 16:28:25 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:16.285 16:28:25 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:16.285 16:28:25 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:16.285 16:28:25 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:16.285 16:28:25 -- target/device_removal.sh@137 -- # origin_ip=192.168.100.8 00:17:16.285 16:28:25 -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_0 00:17:16.285 16:28:25 -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:17:16.286 16:28:25 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.0/net/mlx_0_0/device 00:17:16.286 16:28:25 -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0 00:17:16.286 16:28:25 -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:17:16.286 16:28:25 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:17:16.286 16:28:25 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:17:16.286 16:28:25 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:17:16.286 16:28:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.286 16:28:25 -- common/autotest_common.sh@10 -- # set +x 00:17:16.286 16:28:25 -- target/device_removal.sh@77 -- # grep mlx5_0 00:17:16.545 16:28:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.545 mlx5_0 00:17:16.545 16:28:25 -- target/device_removal.sh@78 -- # return 0 00:17:16.545 16:28:25 -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_0 00:17:16.545 16:28:25 -- target/device_removal.sh@66 -- # dev_name=mlx_0_0 00:17:16.545 16:28:25 -- target/device_removal.sh@67 -- # echo 1 00:17:16.545 16:28:25 -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_0 00:17:16.545 16:28:25 -- target/device_removal.sh@61 -- # dev_name=mlx_0_0 00:17:16.545 16:28:25 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.0/net/mlx_0_0/device 00:17:16.545 [2024-04-26 16:28:25.370613] rdma.c:3563:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.8:4420 on device mlx5_0 is being removed. 
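Before pulling the device, the trace above resolves the netdev to its RDMA device name and PCI directory purely through sysfs, then confirms the nvmf target's poll group still knows that device. A sketch of that lookup, assuming the in-tree scripts/rpc.py and jq; it goes through /sys/class/net, which resolves to the same PCI directory as the /sys/bus/pci path used in the trace:

    DEV=mlx_0_0
    PCI_DIR=$(readlink -f "/sys/class/net/${DEV}/device")   # e.g. /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0
    RDMA_DEV=$(ls "${PCI_DIR}/infiniband")                  # e.g. mlx5_0
    # Ask the target which RDMA devices its poll group currently holds:
    ./scripts/rpc.py nvmf_get_stats \
      | jq -r '.poll_groups[0].transports[].devices[].name' \
      | grep -q "$RDMA_DEV" && echo "${RDMA_DEV} present" || echo "${RDMA_DEV} gone"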
00:17:16.545 [2024-04-26 16:28:25.370822] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:17:16.545 [2024-04-26 16:28:25.376933] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:17:16.545 [2024-04-26 16:28:25.376958] rdma.c: 845:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 94 00:17:19.836 16:28:28 -- target/device_removal.sh@147 -- # seq 1 10 00:17:19.836 16:28:28 -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:17:19.836 16:28:28 -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_0 00:17:19.836 16:28:28 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_0 00:17:19.836 16:28:28 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:17:19.836 16:28:28 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:17:19.836 16:28:28 -- target/device_removal.sh@77 -- # grep mlx5_0 00:17:19.836 16:28:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.836 16:28:28 -- common/autotest_common.sh@10 -- # set +x 00:17:19.836 16:28:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.836 16:28:28 -- target/device_removal.sh@78 -- # return 1 00:17:19.836 16:28:28 -- target/device_removal.sh@149 -- # break 00:17:19.836 16:28:28 -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:17:19.836 16:28:28 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:17:19.836 16:28:28 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:17:19.836 16:28:28 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:17:19.836 16:28:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.836 16:28:28 -- common/autotest_common.sh@10 -- # set +x 00:17:19.836 16:28:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.836 16:28:28 -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:17:19.836 16:28:28 -- target/device_removal.sh@160 -- # rescan_pci 00:17:19.836 16:28:28 -- target/device_removal.sh@57 -- # echo 1 00:17:20.774 [2024-04-26 16:28:29.689280] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xf18920/0xf1a180) succeed. 00:17:20.774 [2024-04-26 16:28:29.689362] rdma.c:3305:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.8:4420 is still failed(-1) to listen. 
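The removal itself is an ordinary sysfs PCI hot-unplug followed by a rescan; the xtrace only shows the echo 1 commands, not their redirection targets, so the sysfs paths below are the conventional ones and should be read as an assumption. After the unplug the test polls nvmf_get_stats until the IB device count drops, then rescans:

    PCI_DIR=/sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0
    echo 1 > "${PCI_DIR}/remove"      # hot-remove the function; the target logs "... is being removed"
    for i in $(seq 1 10); do          # wait for the poll group to drop the IB device
      count=$(./scripts/rpc.py nvmf_get_stats | jq -r '.poll_groups[0].transports[].devices | length')
      [ "$count" -eq 1 ] && break
      sleep 1
    done
    echo 1 > /sys/bus/pci/rescan      # re-add the function; a new IB device (mlx5_0) is created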
00:17:20.774 16:28:29 -- target/device_removal.sh@162 -- # seq 1 10 00:17:20.774 16:28:29 -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:17:20.774 16:28:29 -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/net 00:17:20.774 16:28:29 -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_0 00:17:20.774 16:28:29 -- target/device_removal.sh@164 -- # [[ -z mlx_0_0 ]] 00:17:20.774 16:28:29 -- target/device_removal.sh@166 -- # [[ mlx_0_0 != \m\l\x\_\0\_\0 ]] 00:17:20.774 16:28:29 -- target/device_removal.sh@171 -- # break 00:17:20.774 16:28:29 -- target/device_removal.sh@175 -- # [[ -z mlx_0_0 ]] 00:17:20.774 16:28:29 -- target/device_removal.sh@179 -- # ip link set mlx_0_0 up 00:17:21.342 16:28:30 -- target/device_removal.sh@180 -- # get_ip_address mlx_0_0 00:17:21.342 16:28:30 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:17:21.342 16:28:30 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:17:21.342 16:28:30 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:21.342 16:28:30 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:21.342 16:28:30 -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:17:21.342 16:28:30 -- target/device_removal.sh@181 -- # ip addr add 192.168.100.8/24 dev mlx_0_0 00:17:21.601 16:28:30 -- target/device_removal.sh@186 -- # seq 1 10 00:17:21.601 16:28:30 -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:17:21.601 16:28:30 -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:17:21.601 16:28:30 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:17:21.601 16:28:30 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:17:21.601 16:28:30 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:17:21.601 16:28:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:21.601 16:28:30 -- common/autotest_common.sh@10 -- # set +x 00:17:21.601 [2024-04-26 16:28:30.386562] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:21.601 [2024-04-26 16:28:30.386605] rdma.c:3311:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.8:4420 come back 00:17:21.601 [2024-04-26 16:28:30.386626] rdma.c:3841:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:17:21.601 [2024-04-26 16:28:30.386644] rdma.c:3841:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:17:21.601 16:28:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:21.601 16:28:30 -- target/device_removal.sh@187 -- # ib_count=2 00:17:21.601 16:28:30 -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:17:21.601 16:28:30 -- target/device_removal.sh@189 -- # break 00:17:21.601 16:28:30 -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:17:21.601 16:28:30 -- target/device_removal.sh@135 -- # nvme_dev=mlx_0_1 00:17:21.601 16:28:30 -- target/device_removal.sh@136 -- # get_rdma_device_name mlx_0_1 00:17:21.601 16:28:30 -- target/device_removal.sh@71 -- # dev_name=mlx_0_1 00:17:21.601 16:28:30 -- target/device_removal.sh@72 -- # get_pci_dir mlx_0_1 00:17:21.601 16:28:30 -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:17:21.601 16:28:30 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.1/net/mlx_0_1/device 00:17:21.601 16:28:30 -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.1/infiniband 00:17:21.601 16:28:30 -- target/device_removal.sh@136 -- # rdma_dev_name=mlx5_1 00:17:21.601 
16:28:30 -- target/device_removal.sh@137 -- # get_ip_address mlx_0_1 00:17:21.601 16:28:30 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:21.601 16:28:30 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:21.601 16:28:30 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:21.601 16:28:30 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:21.601 16:28:30 -- target/device_removal.sh@137 -- # origin_ip=192.168.100.9 00:17:21.601 16:28:30 -- target/device_removal.sh@138 -- # get_pci_dir mlx_0_1 00:17:21.601 16:28:30 -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:17:21.601 16:28:30 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.1/net/mlx_0_1/device 00:17:21.601 16:28:30 -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:17/0000:17:00.0/0000:18:00.1 00:17:21.601 16:28:30 -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:17:21.601 16:28:30 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:17:21.601 16:28:30 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:17:21.601 16:28:30 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:17:21.601 16:28:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:21.601 16:28:30 -- target/device_removal.sh@77 -- # grep mlx5_1 00:17:21.601 16:28:30 -- common/autotest_common.sh@10 -- # set +x 00:17:21.601 16:28:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:21.601 mlx5_1 00:17:21.601 16:28:30 -- target/device_removal.sh@78 -- # return 0 00:17:21.601 16:28:30 -- target/device_removal.sh@145 -- # remove_one_nic mlx_0_1 00:17:21.601 16:28:30 -- target/device_removal.sh@66 -- # dev_name=mlx_0_1 00:17:21.601 16:28:30 -- target/device_removal.sh@67 -- # echo 1 00:17:21.601 16:28:30 -- target/device_removal.sh@67 -- # get_pci_dir mlx_0_1 00:17:21.601 16:28:30 -- target/device_removal.sh@61 -- # dev_name=mlx_0_1 00:17:21.601 16:28:30 -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:18:00.1/net/mlx_0_1/device 00:17:21.601 [2024-04-26 16:28:30.570614] rdma.c:3563:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.9:4420 on device mlx5_1 is being removed. 
00:17:21.601 [2024-04-26 16:28:30.570697] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:17:21.601 [2024-04-26 16:28:30.574223] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:17:21.601 [2024-04-26 16:28:30.574241] rdma.c: 845:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 127 00:17:24.887 16:28:33 -- target/device_removal.sh@147 -- # seq 1 10 00:17:24.887 16:28:33 -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:17:24.887 16:28:33 -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt mlx5_1 00:17:24.887 16:28:33 -- target/device_removal.sh@76 -- # local rdma_dev_name=mlx5_1 00:17:24.887 16:28:33 -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:17:24.887 16:28:33 -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:17:24.887 16:28:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:24.887 16:28:33 -- common/autotest_common.sh@10 -- # set +x 00:17:24.887 16:28:33 -- target/device_removal.sh@77 -- # grep mlx5_1 00:17:25.145 16:28:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:25.145 16:28:33 -- target/device_removal.sh@78 -- # return 1 00:17:25.145 16:28:33 -- target/device_removal.sh@149 -- # break 00:17:25.145 16:28:33 -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:17:25.145 16:28:33 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:17:25.145 16:28:33 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:17:25.145 16:28:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:25.145 16:28:33 -- common/autotest_common.sh@10 -- # set +x 00:17:25.145 16:28:33 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:17:25.145 16:28:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:25.145 16:28:33 -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:17:25.145 16:28:33 -- target/device_removal.sh@160 -- # rescan_pci 00:17:25.145 16:28:33 -- target/device_removal.sh@57 -- # echo 1 00:17:26.081 [2024-04-26 16:28:34.784126] rdma.c:3252:nvmf_rdma_rescan_devices: *WARNING*: Failed to init ibv device 0xff3800, err 11. Skip rescan. 00:17:26.081 [2024-04-26 16:28:34.874747] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xf18fa0/0xf5b810) succeed. 00:17:26.081 [2024-04-26 16:28:34.874821] rdma.c:3305:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.9:4420 is still failed(-1) to listen. 
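After the rescan the kernel re-creates the netdev but without its address, so the test (as seen for mlx_0_0 above and for mlx_0_1 just below) looks up the new interface name under the PCI directory, brings the link up, and re-adds the original /24; the target then retries the failed listener on its own and logs that the port came back. A sketch of that restore step, with the address and PCI path as illustrative values:

    PCI_DIR=/sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0
    NEW_DEV=$(ls "${PCI_DIR}/net")                       # netdev name re-created by the rescan, e.g. mlx_0_0
    ip link set "$NEW_DEV" up
    CUR_IP=$(ip -o -4 addr show "$NEW_DEV" | awk '{print $4}' | cut -d/ -f1)
    [ -z "$CUR_IP" ] && ip addr add 192.168.100.8/24 dev "$NEW_DEV"
    # No explicit listener re-add is needed: nvmf_rdma_retry_listen_port brings 192.168.100.8:4420 back.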
00:17:26.081 16:28:34 -- target/device_removal.sh@162 -- # seq 1 10 00:17:26.081 16:28:34 -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:17:26.081 16:28:34 -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.1/net 00:17:26.081 16:28:34 -- target/device_removal.sh@163 -- # new_net_dev=mlx_0_1 00:17:26.081 16:28:34 -- target/device_removal.sh@164 -- # [[ -z mlx_0_1 ]] 00:17:26.081 16:28:34 -- target/device_removal.sh@166 -- # [[ mlx_0_1 != \m\l\x\_\0\_\1 ]] 00:17:26.081 16:28:34 -- target/device_removal.sh@171 -- # break 00:17:26.081 16:28:34 -- target/device_removal.sh@175 -- # [[ -z mlx_0_1 ]] 00:17:26.081 16:28:34 -- target/device_removal.sh@179 -- # ip link set mlx_0_1 up 00:17:26.649 16:28:35 -- target/device_removal.sh@180 -- # get_ip_address mlx_0_1 00:17:26.649 16:28:35 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:17:26.649 16:28:35 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:26.649 16:28:35 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:17:26.649 16:28:35 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:26.649 16:28:35 -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:17:26.649 16:28:35 -- target/device_removal.sh@181 -- # ip addr add 192.168.100.9/24 dev mlx_0_1 00:17:26.649 16:28:35 -- target/device_removal.sh@186 -- # seq 1 10 00:17:26.649 [2024-04-26 16:28:35.569841] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:17:26.649 [2024-04-26 16:28:35.569875] rdma.c:3311:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.9:4420 come back 00:17:26.649 [2024-04-26 16:28:35.569892] rdma.c:3841:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:17:26.649 [2024-04-26 16:28:35.569909] rdma.c:3841:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:17:26.649 16:28:35 -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:17:26.649 16:28:35 -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:17:26.649 16:28:35 -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:17:26.649 16:28:35 -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:17:26.649 16:28:35 -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:17:26.649 16:28:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:26.649 16:28:35 -- common/autotest_common.sh@10 -- # set +x 00:17:26.649 16:28:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:26.649 16:28:35 -- target/device_removal.sh@187 -- # ib_count=2 00:17:26.649 16:28:35 -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:17:26.649 16:28:35 -- target/device_removal.sh@189 -- # break 00:17:26.649 16:28:35 -- target/device_removal.sh@200 -- # stop_bdevperf 00:17:26.649 16:28:35 -- target/device_removal.sh@116 -- # wait 486094 00:18:48.097 0 00:18:48.097 16:29:50 -- target/device_removal.sh@118 -- # killprocess 485998 00:18:48.097 16:29:50 -- common/autotest_common.sh@936 -- # '[' -z 485998 ']' 00:18:48.097 16:29:50 -- common/autotest_common.sh@940 -- # kill -0 485998 00:18:48.097 16:29:50 -- common/autotest_common.sh@941 -- # uname 00:18:48.097 16:29:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:48.097 16:29:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 485998 00:18:48.097 16:29:50 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:48.097 16:29:50 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:48.097 16:29:50 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 485998' 00:18:48.097 killing process with pid 485998 00:18:48.097 16:29:50 -- common/autotest_common.sh@955 -- # kill 485998 00:18:48.097 16:29:50 -- common/autotest_common.sh@960 -- # wait 485998 00:18:48.097 16:29:50 -- target/device_removal.sh@119 -- # bdevperf_pid= 00:18:48.097 16:29:50 -- target/device_removal.sh@121 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt 00:18:48.097 [2024-04-26 16:28:19.189204] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:18:48.098 [2024-04-26 16:28:19.189255] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid485998 ] 00:18:48.098 EAL: No free 2048 kB hugepages reported on node 1 00:18:48.098 [2024-04-26 16:28:19.257476] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.098 [2024-04-26 16:28:19.333341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:48.098 Running I/O for 90 seconds... 00:18:48.098 [2024-04-26 16:28:25.375737] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:18:48.098 [2024-04-26 16:28:25.375773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:48.098 [2024-04-26 16:28:25.375786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32602 cdw0:16 sqhd:b3b9 p:0 m:0 dnr:0 00:18:48.098 [2024-04-26 16:28:25.375798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:48.098 [2024-04-26 16:28:25.375808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32602 cdw0:16 sqhd:b3b9 p:0 m:0 dnr:0 00:18:48.098 [2024-04-26 16:28:25.375818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:48.098 [2024-04-26 16:28:25.375828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32602 cdw0:16 sqhd:b3b9 p:0 m:0 dnr:0 00:18:48.098 [2024-04-26 16:28:25.375837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:48.098 [2024-04-26 16:28:25.375847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32602 cdw0:16 sqhd:b3b9 p:0 m:0 dnr:0 00:18:48.098 [2024-04-26 16:28:25.378640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:48.098 [2024-04-26 16:28:25.378659] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:18:48.098 [2024-04-26 16:28:25.378689] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:18:48.098 [2024-04-26 16:28:25.385740] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.395760] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
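The try.txt contents dumped from here on are bdevperf's own log. The surrounding trace shows the overall workflow: bdevperf is started with -z so it only waits for RPCs on a private socket, the NVMe-oF controllers are attached over that socket, and the timed I/O phase is kicked off with bdevperf.py perform_tests. A condensed sketch, assuming paths relative to an SPDK checkout (the test additionally waits for the socket to appear before issuing RPCs):

    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &
    BDEVPERF_PID=$!
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_mlx_0_0 \
        -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_mlx_0_0 -l -1 -o 1
    ./examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests
    kill -0 "$BDEVPERF_PID" && kill "$BDEVPERF_PID"      # the test kills bdevperf once perform_tests returns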
00:18:48.098 [2024-04-26 16:28:25.405874] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.415901] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.425927] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.436012] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.446323] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.457304] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.467413] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.478101] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.488640] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.499046] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.509087] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.519111] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.529469] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.539498] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.549629] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.559657] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.569684] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.579711] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.589734] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.599760] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.609783] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.619809] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.629833] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:18:48.098 [2024-04-26 16:28:25.639858] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.649883] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.659907] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.670838] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.680855] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.690881] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.700907] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.710932] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.721133] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.731435] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.741734] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.751824] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.762185] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.772385] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.783232] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.793640] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.804563] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.814757] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.824775] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.834801] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.844945] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.854971] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.864997] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:18:48.098 [2024-04-26 16:28:25.875022] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.885182] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.895197] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.905224] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.915531] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.927848] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.937873] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.948240] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.958264] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.098 [2024-04-26 16:28:25.968291] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.099 [2024-04-26 16:28:25.978364] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.099 [2024-04-26 16:28:25.988390] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.099 [2024-04-26 16:28:25.998470] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.099 [2024-04-26 16:28:26.008496] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.099 [2024-04-26 16:28:26.018601] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.099 [2024-04-26 16:28:26.028629] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.099 [2024-04-26 16:28:26.038654] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.099 [2024-04-26 16:28:26.048679] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.099 [2024-04-26 16:28:26.058703] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.099 [2024-04-26 16:28:26.069368] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.099 [2024-04-26 16:28:26.079394] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.099 [2024-04-26 16:28:26.089419] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.099 [2024-04-26 16:28:26.099447] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:18:48.099 [2024-04-26 16:28:26.109643] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.099 [2024-04-26 16:28:26.120022] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.099 [2024-04-26 16:28:26.130164] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.099 [2024-04-26 16:28:26.140901] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.099 [2024-04-26 16:28:26.152524] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.099 [2024-04-26 16:28:26.163031] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.099 [2024-04-26 16:28:26.173117] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.099 [2024-04-26 16:28:26.183465] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.099 [2024-04-26 16:28:26.193491] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.099 [2024-04-26 16:28:26.205100] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.099 [2024-04-26 16:28:26.215513] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.099 [2024-04-26 16:28:26.225537] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.099 [2024-04-26 16:28:26.235563] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.099 [2024-04-26 16:28:26.245769] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.099 [2024-04-26 16:28:26.255795] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.099 [2024-04-26 16:28:26.265819] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.099 [2024-04-26 16:28:26.275845] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.099 [2024-04-26 16:28:26.288535] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.099 [2024-04-26 16:28:26.298560] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.099 [2024-04-26 16:28:26.308688] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.099 [2024-04-26 16:28:26.318731] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.099 [2024-04-26 16:28:26.328746] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.099 [2024-04-26 16:28:26.338770] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:18:48.099 [2024-04-26 16:28:26.348797] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.099 [2024-04-26 16:28:26.358823] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.099 [2024-04-26 16:28:26.368851] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.099 [2024-04-26 16:28:26.378877] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.099 [2024-04-26 16:28:26.381098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:161672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.099 [2024-04-26 16:28:26.381114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.099 [2024-04-26 16:28:26.381134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:161680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.099 [2024-04-26 16:28:26.381151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.099 [2024-04-26 16:28:26.381163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:161688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.099 [2024-04-26 16:28:26.381173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.099 [2024-04-26 16:28:26.381183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:161696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.099 [2024-04-26 16:28:26.381192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.099 [2024-04-26 16:28:26.381203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:161704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.099 [2024-04-26 16:28:26.381213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.099 [2024-04-26 16:28:26.381224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:161712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.099 [2024-04-26 16:28:26.381233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.099 [2024-04-26 16:28:26.381243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:161720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.099 [2024-04-26 16:28:26.381253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.099 [2024-04-26 16:28:26.381263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:161728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.099 [2024-04-26 16:28:26.381273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.099 [2024-04-26 16:28:26.381283] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:161736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.099 [2024-04-26 16:28:26.381293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.099 [2024-04-26 16:28:26.381304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:161744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.099 [2024-04-26 16:28:26.381313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.099 [2024-04-26 16:28:26.381323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:161752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.099 [2024-04-26 16:28:26.381333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.099 [2024-04-26 16:28:26.381343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:161760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.099 [2024-04-26 16:28:26.381356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.099 [2024-04-26 16:28:26.381367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:161768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.099 [2024-04-26 16:28:26.381376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.099 [2024-04-26 16:28:26.381387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:161776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.100 [2024-04-26 16:28:26.381398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.100 [2024-04-26 16:28:26.381408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:161784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:48.100 [2024-04-26 16:28:26.381417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.100 [2024-04-26 16:28:26.381429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:160768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077fe000 len:0x1000 key:0x1800ef 00:18:48.100 [2024-04-26 16:28:26.381439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.100 [2024-04-26 16:28:26.381450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:160776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077fc000 len:0x1000 key:0x1800ef 00:18:48.100 [2024-04-26 16:28:26.381459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.100 [2024-04-26 16:28:26.381470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:160784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077fa000 len:0x1000 key:0x1800ef 00:18:48.100 [2024-04-26 16:28:26.381480] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.100 [2024-04-26 16:28:26.381491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:160792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077f8000 len:0x1000 key:0x1800ef 00:18:48.100 [2024-04-26 16:28:26.381501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.100 [2024-04-26 16:28:26.381511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:160800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077f6000 len:0x1000 key:0x1800ef 00:18:48.100 [2024-04-26 16:28:26.381521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.100 [2024-04-26 16:28:26.381532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:160808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077f4000 len:0x1000 key:0x1800ef 00:18:48.100 [2024-04-26 16:28:26.381541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.100 [2024-04-26 16:28:26.381553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:160816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077f2000 len:0x1000 key:0x1800ef 00:18:48.100 [2024-04-26 16:28:26.381562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.100 [2024-04-26 16:28:26.381573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:160824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077f0000 len:0x1000 key:0x1800ef 00:18:48.100 [2024-04-26 16:28:26.381582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.100 [2024-04-26 16:28:26.381593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:160832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ee000 len:0x1000 key:0x1800ef 00:18:48.100 [2024-04-26 16:28:26.381602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.100 [2024-04-26 16:28:26.381614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:160840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ec000 len:0x1000 key:0x1800ef 00:18:48.100 [2024-04-26 16:28:26.381623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.100 [2024-04-26 16:28:26.381635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:160848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ea000 len:0x1000 key:0x1800ef 00:18:48.100 [2024-04-26 16:28:26.381644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.100 [2024-04-26 16:28:26.381657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:160856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077e8000 len:0x1000 key:0x1800ef 00:18:48.100 [2024-04-26 16:28:26.381666] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.100 [2024-04-26 16:28:26.381677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:160864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077e6000 len:0x1000 key:0x1800ef 00:18:48.100 [2024-04-26 16:28:26.381687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.100 [2024-04-26 16:28:26.381697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:160872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077e4000 len:0x1000 key:0x1800ef 00:18:48.100 [2024-04-26 16:28:26.381706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.100 [2024-04-26 16:28:26.381718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:160880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077e2000 len:0x1000 key:0x1800ef 00:18:48.100 [2024-04-26 16:28:26.381727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.100 [2024-04-26 16:28:26.381738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:160888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077e0000 len:0x1000 key:0x1800ef 00:18:48.100 [2024-04-26 16:28:26.381748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.100 [2024-04-26 16:28:26.381758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:160896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077de000 len:0x1000 key:0x1800ef 00:18:48.100 [2024-04-26 16:28:26.381768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.100 [2024-04-26 16:28:26.381779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:160904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077dc000 len:0x1000 key:0x1800ef 00:18:48.100 [2024-04-26 16:28:26.381789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.100 [2024-04-26 16:28:26.381800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:160912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077da000 len:0x1000 key:0x1800ef 00:18:48.100 [2024-04-26 16:28:26.381809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.100 [2024-04-26 16:28:26.381820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:160920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077d8000 len:0x1000 key:0x1800ef 00:18:48.100 [2024-04-26 16:28:26.381829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.100 [2024-04-26 16:28:26.381840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:160928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077d6000 len:0x1000 key:0x1800ef 00:18:48.100 [2024-04-26 16:28:26.381849] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.100 [2024-04-26 16:28:26.381862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:160936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077d4000 len:0x1000 key:0x1800ef 00:18:48.100 [2024-04-26 16:28:26.381871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.100 [2024-04-26 16:28:26.381883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:160944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077d2000 len:0x1000 key:0x1800ef 00:18:48.100 [2024-04-26 16:28:26.381892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.100 [2024-04-26 16:28:26.381903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:160952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077d0000 len:0x1000 key:0x1800ef 00:18:48.100 [2024-04-26 16:28:26.381912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.100 [2024-04-26 16:28:26.381923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:160960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ce000 len:0x1000 key:0x1800ef 00:18:48.100 [2024-04-26 16:28:26.381933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.100 [2024-04-26 16:28:26.381944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:160968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077cc000 len:0x1000 key:0x1800ef 00:18:48.100 [2024-04-26 16:28:26.381953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.100 [2024-04-26 16:28:26.381964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:160976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ca000 len:0x1000 key:0x1800ef 00:18:48.100 [2024-04-26 16:28:26.381973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.100 [2024-04-26 16:28:26.381984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:160984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c8000 len:0x1000 key:0x1800ef 00:18:48.100 [2024-04-26 16:28:26.381993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.100 [2024-04-26 16:28:26.382005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:160992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c6000 len:0x1000 key:0x1800ef 00:18:48.100 [2024-04-26 16:28:26.382014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.101 [2024-04-26 16:28:26.382024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:161000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c4000 len:0x1000 key:0x1800ef 00:18:48.101 [2024-04-26 16:28:26.382034] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.101 [2024-04-26 16:28:26.382045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:161008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c2000 len:0x1000 key:0x1800ef 00:18:48.101 [2024-04-26 16:28:26.382054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.101 [2024-04-26 16:28:26.382066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:161016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077c0000 len:0x1000 key:0x1800ef 00:18:48.101 [2024-04-26 16:28:26.382075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.101 [2024-04-26 16:28:26.382085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:161024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077be000 len:0x1000 key:0x1800ef 00:18:48.101 [2024-04-26 16:28:26.382096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.101 [2024-04-26 16:28:26.382107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:161032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077bc000 len:0x1000 key:0x1800ef 00:18:48.101 [2024-04-26 16:28:26.382116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.101 [2024-04-26 16:28:26.382127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:161040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ba000 len:0x1000 key:0x1800ef 00:18:48.101 [2024-04-26 16:28:26.382136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.101 [2024-04-26 16:28:26.382147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:161048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b8000 len:0x1000 key:0x1800ef 00:18:48.101 [2024-04-26 16:28:26.382156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.101 [2024-04-26 16:28:26.382167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:161056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b6000 len:0x1000 key:0x1800ef 00:18:48.101 [2024-04-26 16:28:26.382176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.101 [2024-04-26 16:28:26.382187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:161064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b4000 len:0x1000 key:0x1800ef 00:18:48.101 [2024-04-26 16:28:26.382196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.101 [2024-04-26 16:28:26.382207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:161072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b2000 len:0x1000 key:0x1800ef 00:18:48.101 [2024-04-26 16:28:26.382216] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.101 [2024-04-26 16:28:26.382227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:161080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077b0000 len:0x1000 key:0x1800ef 00:18:48.101 [2024-04-26 16:28:26.382236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.101 [2024-04-26 16:28:26.382247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:161088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ae000 len:0x1000 key:0x1800ef 00:18:48.101 [2024-04-26 16:28:26.382256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.101 [2024-04-26 16:28:26.382267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:161096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077ac000 len:0x1000 key:0x1800ef 00:18:48.101 [2024-04-26 16:28:26.382276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.101 [2024-04-26 16:28:26.382287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:161104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077aa000 len:0x1000 key:0x1800ef 00:18:48.101 [2024-04-26 16:28:26.382296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.101 [2024-04-26 16:28:26.382307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:161112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a8000 len:0x1000 key:0x1800ef 00:18:48.101 [2024-04-26 16:28:26.382318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.101 [2024-04-26 16:28:26.382330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:161120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a6000 len:0x1000 key:0x1800ef 00:18:48.101 [2024-04-26 16:28:26.382339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.101 [2024-04-26 16:28:26.382355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:161128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a4000 len:0x1000 key:0x1800ef 00:18:48.101 [2024-04-26 16:28:26.382364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.101 [2024-04-26 16:28:26.382375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:161136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a2000 len:0x1000 key:0x1800ef 00:18:48.101 [2024-04-26 16:28:26.382385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.101 [2024-04-26 16:28:26.382395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:161144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077a0000 len:0x1000 key:0x1800ef 00:18:48.101 [2024-04-26 16:28:26.382405] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.101 [2024-04-26 16:28:26.382416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:161152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000779e000 len:0x1000 key:0x1800ef 00:18:48.101 [2024-04-26 16:28:26.382426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.101 [2024-04-26 16:28:26.382437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:161160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000779c000 len:0x1000 key:0x1800ef 00:18:48.101 [2024-04-26 16:28:26.382446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.101 [2024-04-26 16:28:26.382457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:161168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000779a000 len:0x1000 key:0x1800ef 00:18:48.101 [2024-04-26 16:28:26.382466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.101 [2024-04-26 16:28:26.382477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:161176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007798000 len:0x1000 key:0x1800ef 00:18:48.101 [2024-04-26 16:28:26.382486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.101 [2024-04-26 16:28:26.382497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:161184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007796000 len:0x1000 key:0x1800ef 00:18:48.101 [2024-04-26 16:28:26.382506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.101 [2024-04-26 16:28:26.382517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:161192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007794000 len:0x1000 key:0x1800ef 00:18:48.101 [2024-04-26 16:28:26.382526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.101 [2024-04-26 16:28:26.382538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:161200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007792000 len:0x1000 key:0x1800ef 00:18:48.101 [2024-04-26 16:28:26.382548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.101 [2024-04-26 16:28:26.382559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:161208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007790000 len:0x1000 key:0x1800ef 00:18:48.101 [2024-04-26 16:28:26.382568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.101 [2024-04-26 16:28:26.382580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:161216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000778e000 len:0x1000 key:0x1800ef 00:18:48.101 [2024-04-26 16:28:26.382589] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.101 [2024-04-26 16:28:26.382600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:161224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000778c000 len:0x1000 key:0x1800ef 00:18:48.101 [2024-04-26 16:28:26.382609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.102 [2024-04-26 16:28:26.382619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:161232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000778a000 len:0x1000 key:0x1800ef 00:18:48.102 [2024-04-26 16:28:26.382629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.102 [2024-04-26 16:28:26.382640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:161240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007788000 len:0x1000 key:0x1800ef 00:18:48.102 [2024-04-26 16:28:26.382649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.102 [2024-04-26 16:28:26.382660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:161248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007786000 len:0x1000 key:0x1800ef 00:18:48.102 [2024-04-26 16:28:26.382669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.102 [2024-04-26 16:28:26.382680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:161256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007784000 len:0x1000 key:0x1800ef 00:18:48.102 [2024-04-26 16:28:26.382689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.102 [2024-04-26 16:28:26.382700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:161264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007782000 len:0x1000 key:0x1800ef 00:18:48.102 [2024-04-26 16:28:26.382709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.102 [2024-04-26 16:28:26.382720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:161272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007780000 len:0x1000 key:0x1800ef 00:18:48.102 [2024-04-26 16:28:26.382729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.102 [2024-04-26 16:28:26.382740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:161280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777e000 len:0x1000 key:0x1800ef 00:18:48.102 [2024-04-26 16:28:26.382752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.102 [2024-04-26 16:28:26.382763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:161288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777c000 len:0x1000 key:0x1800ef 00:18:48.102 [2024-04-26 16:28:26.382773] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.102 [2024-04-26 16:28:26.382785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:161296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000777a000 len:0x1000 key:0x1800ef 00:18:48.102 [2024-04-26 16:28:26.382795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.102 [2024-04-26 16:28:26.382806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:161304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007778000 len:0x1000 key:0x1800ef 00:18:48.102 [2024-04-26 16:28:26.382815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.102 [2024-04-26 16:28:26.382826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:161312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007776000 len:0x1000 key:0x1800ef 00:18:48.102 [2024-04-26 16:28:26.382835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.102 [2024-04-26 16:28:26.382846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:161320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007774000 len:0x1000 key:0x1800ef 00:18:48.102 [2024-04-26 16:28:26.382855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.102 [2024-04-26 16:28:26.382866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:161328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007772000 len:0x1000 key:0x1800ef 00:18:48.102 [2024-04-26 16:28:26.382875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.102 [2024-04-26 16:28:26.382886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:161336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007770000 len:0x1000 key:0x1800ef 00:18:48.102 [2024-04-26 16:28:26.382895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.102 [2024-04-26 16:28:26.382906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:161344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776e000 len:0x1000 key:0x1800ef 00:18:48.102 [2024-04-26 16:28:26.382916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.102 [2024-04-26 16:28:26.382926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:161352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776c000 len:0x1000 key:0x1800ef 00:18:48.102 [2024-04-26 16:28:26.382936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.102 [2024-04-26 16:28:26.382947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:161360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000776a000 len:0x1000 key:0x1800ef 00:18:48.102 [2024-04-26 16:28:26.382956] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.102 [2024-04-26 16:28:26.382968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:161368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007768000 len:0x1000 key:0x1800ef 00:18:48.102 [2024-04-26 16:28:26.382977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.102 [2024-04-26 16:28:26.382988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:161376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007766000 len:0x1000 key:0x1800ef 00:18:48.102 [2024-04-26 16:28:26.382997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.102 [2024-04-26 16:28:26.383009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:161384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007764000 len:0x1000 key:0x1800ef 00:18:48.102 [2024-04-26 16:28:26.383019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.102 [2024-04-26 16:28:26.383030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:161392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007762000 len:0x1000 key:0x1800ef 00:18:48.102 [2024-04-26 16:28:26.383039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.102 [2024-04-26 16:28:26.383050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:161400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007760000 len:0x1000 key:0x1800ef 00:18:48.102 [2024-04-26 16:28:26.383059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.102 [2024-04-26 16:28:26.383070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:161408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775e000 len:0x1000 key:0x1800ef 00:18:48.102 [2024-04-26 16:28:26.383079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.102 [2024-04-26 16:28:26.383090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:161416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775c000 len:0x1000 key:0x1800ef 00:18:48.102 [2024-04-26 16:28:26.383099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.102 [2024-04-26 16:28:26.383110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:161424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000775a000 len:0x1000 key:0x1800ef 00:18:48.102 [2024-04-26 16:28:26.383120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.102 [2024-04-26 16:28:26.383130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:161432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007758000 len:0x1000 key:0x1800ef 00:18:48.102 [2024-04-26 16:28:26.383140] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.102 [2024-04-26 16:28:26.383150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:161440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007756000 len:0x1000 key:0x1800ef 00:18:48.102 [2024-04-26 16:28:26.383160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.102 [2024-04-26 16:28:26.383170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:161448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007754000 len:0x1000 key:0x1800ef 00:18:48.102 [2024-04-26 16:28:26.383180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.102 [2024-04-26 16:28:26.383191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:161456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007752000 len:0x1000 key:0x1800ef 00:18:48.102 [2024-04-26 16:28:26.383200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.102 [2024-04-26 16:28:26.383211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:161464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007750000 len:0x1000 key:0x1800ef 00:18:48.102 [2024-04-26 16:28:26.383220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.103 [2024-04-26 16:28:26.383232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:161472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774e000 len:0x1000 key:0x1800ef 00:18:48.103 [2024-04-26 16:28:26.383241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.103 [2024-04-26 16:28:26.383252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:161480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774c000 len:0x1000 key:0x1800ef 00:18:48.103 [2024-04-26 16:28:26.383262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.103 [2024-04-26 16:28:26.383272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:161488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000774a000 len:0x1000 key:0x1800ef 00:18:48.103 [2024-04-26 16:28:26.383281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.103 [2024-04-26 16:28:26.383292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:161496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007748000 len:0x1000 key:0x1800ef 00:18:48.103 [2024-04-26 16:28:26.383302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.103 [2024-04-26 16:28:26.383313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:161504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007746000 len:0x1000 key:0x1800ef 00:18:48.103 [2024-04-26 16:28:26.383322] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.103 [2024-04-26 16:28:26.383333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:161512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007744000 len:0x1000 key:0x1800ef 00:18:48.103 [2024-04-26 16:28:26.383342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.103 [2024-04-26 16:28:26.383356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:161520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007742000 len:0x1000 key:0x1800ef 00:18:48.103 [2024-04-26 16:28:26.383366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.103 [2024-04-26 16:28:26.383377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:161528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007740000 len:0x1000 key:0x1800ef 00:18:48.103 [2024-04-26 16:28:26.383386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.103 [2024-04-26 16:28:26.383397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:161536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773e000 len:0x1000 key:0x1800ef 00:18:48.103 [2024-04-26 16:28:26.383408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.103 [2024-04-26 16:28:26.383419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:161544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773c000 len:0x1000 key:0x1800ef 00:18:48.103 [2024-04-26 16:28:26.383428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.103 [2024-04-26 16:28:26.383439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:161552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000773a000 len:0x1000 key:0x1800ef 00:18:48.103 [2024-04-26 16:28:26.383450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.103 [2024-04-26 16:28:26.383461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:161560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007738000 len:0x1000 key:0x1800ef 00:18:48.103 [2024-04-26 16:28:26.383472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.103 [2024-04-26 16:28:26.383484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:161568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007736000 len:0x1000 key:0x1800ef 00:18:48.103 [2024-04-26 16:28:26.383493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.103 [2024-04-26 16:28:26.383504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:161576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007734000 len:0x1000 key:0x1800ef 00:18:48.103 [2024-04-26 16:28:26.383513] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.103 [2024-04-26 16:28:26.383525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:161584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007732000 len:0x1000 key:0x1800ef 00:18:48.103 [2024-04-26 16:28:26.383534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.103 [2024-04-26 16:28:26.383545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:161592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007730000 len:0x1000 key:0x1800ef 00:18:48.103 [2024-04-26 16:28:26.383554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.103 [2024-04-26 16:28:26.383565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:161600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772e000 len:0x1000 key:0x1800ef 00:18:48.103 [2024-04-26 16:28:26.383574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.103 [2024-04-26 16:28:26.383585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:161608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772c000 len:0x1000 key:0x1800ef 00:18:48.103 [2024-04-26 16:28:26.383595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.103 [2024-04-26 16:28:26.383606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:161616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000772a000 len:0x1000 key:0x1800ef 00:18:48.103 [2024-04-26 16:28:26.383615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.103 [2024-04-26 16:28:26.383626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:161624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007728000 len:0x1000 key:0x1800ef 00:18:48.103 [2024-04-26 16:28:26.383635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.103 [2024-04-26 16:28:26.383647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:161632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007726000 len:0x1000 key:0x1800ef 00:18:48.103 [2024-04-26 16:28:26.383656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.103 [2024-04-26 16:28:26.383667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:161640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007724000 len:0x1000 key:0x1800ef 00:18:48.103 [2024-04-26 16:28:26.383676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.103 [2024-04-26 16:28:26.383687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:161648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007722000 len:0x1000 key:0x1800ef 00:18:48.103 [2024-04-26 16:28:26.383697] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.103 [2024-04-26 16:28:26.383708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:161656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007720000 len:0x1000 key:0x1800ef 00:18:48.103 [2024-04-26 16:28:26.383717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0 00:18:48.103 [2024-04-26 16:28:26.405294] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:48.104 [2024-04-26 16:28:26.405310] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:48.104 [2024-04-26 16:28:26.405320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:161664 len:8 PRP1 0x0 PRP2 0x0 00:18:48.104 [2024-04-26 16:28:26.405330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.104 [2024-04-26 16:28:26.408532] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:18:48.104 [2024-04-26 16:28:26.408811] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:18:48.104 [2024-04-26 16:28:26.408827] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:18:48.104 [2024-04-26 16:28:26.408835] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:18:48.104 [2024-04-26 16:28:26.408855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:48.104 [2024-04-26 16:28:26.408866] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:18:48.104 [2024-04-26 16:28:26.408885] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:18:48.104 [2024-04-26 16:28:26.408895] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:18:48.104 [2024-04-26 16:28:26.408905] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:18:48.104 [2024-04-26 16:28:26.408925] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:48.104 [2024-04-26 16:28:26.408934] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:18:48.104 [2024-04-26 16:28:28.413851] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:18:48.104 [2024-04-26 16:28:28.413888] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:18:48.104 [2024-04-26 16:28:28.413915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:48.104 [2024-04-26 16:28:28.413926] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 
00:18:48.104 [2024-04-26 16:28:28.413939] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:18:48.104 [2024-04-26 16:28:28.413949] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:18:48.104 [2024-04-26 16:28:28.413960] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:18:48.104 [2024-04-26 16:28:28.413983] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:48.104 [2024-04-26 16:28:28.413993] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:18:48.104 [2024-04-26 16:28:30.418983] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:18:48.104 [2024-04-26 16:28:30.419015] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:18:48.104 [2024-04-26 16:28:30.419053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:48.104 [2024-04-26 16:28:30.419066] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] in failed state. 00:18:48.104 [2024-04-26 16:28:30.419082] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] Ctrlr is in error state 00:18:48.104 [2024-04-26 16:28:30.419092] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_0] controller reinitialization failed 00:18:48.104 [2024-04-26 16:28:30.419104] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] already in failed state 00:18:48.104 [2024-04-26 16:28:30.419130] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
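The pattern immediately above is the bdev_nvme reset path retrying an RDMA reconnect roughly every two seconds (16:28:26.408, 16:28:28.413, 16:28:30.418): each attempt fails at address resolution (RDMA_CM_EVENT_ADDR_ERROR, status = -19), the controller is marked failed, and _bdev_nvme_reset_ctrlr_complete logs the reset as failed before the next attempt is scheduled. How long that loop runs depends on the reconnect options the controller was attached with. As a minimal sketch only, the retry window can be bounded at attach time with the standard scripts/rpc.py bdev_nvme_attach_controller options; the bdev name, address, and NQN below are placeholders, not values from this run:

    # Sketch with assumed values: bound bdev_nvme reconnect retries at attach time.
    ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 30 \
        --reconnect-delay-sec 2 \
        --fast-io-fail-timeout-sec 5

With a finite --ctrlr-loss-timeout-sec the reset loop is intended to give up and tear the controller down after that many seconds instead of retrying indefinitely, --reconnect-delay-sec paces the attempts, and --fast-io-fail-timeout-sec roughly bounds how long queued I/O is held before being failed back to the upper layer.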
00:18:48.104 [2024-04-26 16:28:30.419141] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_0] resetting controller 00:18:48.104 [2024-04-26 16:28:30.569408] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:18:48.104 [2024-04-26 16:28:30.569438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:48.104 [2024-04-26 16:28:30.569450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32602 cdw0:16 sqhd:b3b9 p:0 m:0 dnr:0 00:18:48.104 [2024-04-26 16:28:30.569461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:48.104 [2024-04-26 16:28:30.569471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32602 cdw0:16 sqhd:b3b9 p:0 m:0 dnr:0 00:18:48.104 [2024-04-26 16:28:30.569481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:48.104 [2024-04-26 16:28:30.569491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32602 cdw0:16 sqhd:b3b9 p:0 m:0 dnr:0 00:18:48.104 [2024-04-26 16:28:30.569500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:48.104 [2024-04-26 16:28:30.569509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32602 cdw0:16 sqhd:b3b9 p:0 m:0 dnr:0 00:18:48.104 [2024-04-26 16:28:30.575343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:18:48.104 [2024-04-26 16:28:30.575372] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state. 00:18:48.104 [2024-04-26 16:28:30.575405] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:18:48.104 [2024-04-26 16:28:30.579408] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.104 [2024-04-26 16:28:30.589431] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.104 [2024-04-26 16:28:30.599456] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.104 [2024-04-26 16:28:30.609480] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.104 [2024-04-26 16:28:30.619506] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.104 [2024-04-26 16:28:30.629532] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.104 [2024-04-26 16:28:30.639559] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.104 [2024-04-26 16:28:30.649583] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.104 [2024-04-26 16:28:30.659610] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:18:48.104 [2024-04-26 16:28:30.669636] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.104 [2024-04-26 16:28:30.679662] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.104 [2024-04-26 16:28:30.689688] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.104 [2024-04-26 16:28:30.699714] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.104 [2024-04-26 16:28:30.709740] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.104 [2024-04-26 16:28:30.719766] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.104 [2024-04-26 16:28:30.729792] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.104 [2024-04-26 16:28:30.739816] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.104 [2024-04-26 16:28:30.749841] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.104 [2024-04-26 16:28:30.759868] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.104 [2024-04-26 16:28:30.769892] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.104 [2024-04-26 16:28:30.779917] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.104 [2024-04-26 16:28:30.789943] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.104 [2024-04-26 16:28:30.799969] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.104 [2024-04-26 16:28:30.809996] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.104 [2024-04-26 16:28:30.820020] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.104 [2024-04-26 16:28:30.830045] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.104 [2024-04-26 16:28:30.840069] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.104 [2024-04-26 16:28:30.850096] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.104 [2024-04-26 16:28:30.860123] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.104 [2024-04-26 16:28:30.870148] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.104 [2024-04-26 16:28:30.880173] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:30.890198] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:18:48.105 [2024-04-26 16:28:30.900225] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:30.910251] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:30.920276] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:30.930302] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:30.940328] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:30.950352] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:30.960379] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:30.970405] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:30.980431] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:30.990455] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.000482] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.010508] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.020535] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.030560] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.040587] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.050614] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.060640] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.070666] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.080690] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.090715] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.100740] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.110766] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.120793] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:18:48.105 [2024-04-26 16:28:31.130819] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.140843] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.150870] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.160896] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.170921] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.180945] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.190971] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.200997] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.211023] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.221049] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.231074] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.241099] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.251125] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.261151] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.271175] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.281202] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.291227] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.301252] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.311278] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.321302] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.331327] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.341352] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.351378] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:18:48.105 [2024-04-26 16:28:31.361402] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.371427] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.381453] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.391480] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.401506] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.411532] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.421587] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.431594] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.448542] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.456310] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:48.105 [2024-04-26 16:28:31.458539] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.468560] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.478582] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.488606] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.498631] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.508657] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.518684] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.528708] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.538733] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.548760] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.558785] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:18:48.105 [2024-04-26 16:28:31.568809] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
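The burst of "Unable to perform failover, already in progress" notices ends once _bdev_nvme_reset_ctrlr_complete reports "Resetting controller successful" (16:28:31.456310 above); the WRITE prints that follow are the per-command completions for I/O caught in a deleted submission queue during the disconnect, reported as ABORTED - SQ DELETION. As a hedged sketch, one way to confirm the path actually recovered after an episode like this is to query controller and bdev state over RPC; the controller and bdev names below are placeholders, not names from this run:

    # Sketch with assumed names: inspect controller/bdev state after the reset completes.
    ./scripts/rpc.py bdev_nvme_get_controllers -n Nvme0
    ./scripts/rpc.py bdev_get_bdevs -b Nvme0n1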
00:18:48.105 [2024-04-26 16:28:31.577811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:41200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:48.105 [2024-04-26 16:28:31.577824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0
[... the same WRITE command / ABORTED - SQ DELETION (00/08) completion pair repeats for every queued 8-block WRITE from lba:41208 through lba:41976 (cid varies), timestamps 2024-04-26 16:28:31.577840 through 16:28:31.579787 ...]
00:18:48.109 [2024-04-26 16:28:31.579798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:40960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007900000 len:0x1000 key:0x1ff0ef
00:18:48.109 [2024-04-26 16:28:31.579807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32602 cdw0:991e2620 sqhd:8530 p:0 m:0 dnr:0
[... the same READ command / ABORTED - SQ DELETION (00/08) completion pair repeats for every queued 8-block READ from lba:40968 through lba:41184 (cid varies, SGL KEYED DATA BLOCK ADDRESS 0x200007902000 through 0x200007938000, key:0x1ff0ef), timestamps 2024-04-26 16:28:31.579820 through 16:28:31.580382 ...]
00:18:48.110 [2024-04-26 16:28:31.593370] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:18:48.110 [2024-04-26 16:28:31.593384] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:18:48.110 [2024-04-26 16:28:31.593393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41192 len:8 PRP1 0x0 PRP2 0x0
00:18:48.110 [2024-04-26 16:28:31.593403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:48.110 [2024-04-26 16:28:31.593449] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller
00:18:48.110 [2024-04-26 16:28:31.593670] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19)
00:18:48.110 [2024-04-26 16:28:31.593683] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:18:48.110 [2024-04-26 16:28:31.593691] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4300
00:18:48.110 [2024-04-26 16:28:31.593708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:18:48.110 [2024-04-26 16:28:31.593718] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state.
00:18:48.110 [2024-04-26 16:28:31.593732] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state
00:18:48.110 [2024-04-26 16:28:31.593741] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed
00:18:48.110 [2024-04-26 16:28:31.593750] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state
00:18:48.110 [2024-04-26 16:28:31.593768] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:48.110 [2024-04-26 16:28:31.593777] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller
00:18:48.110 [2024-04-26 16:28:33.601285] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:18:48.110 [2024-04-26 16:28:33.601327] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4300
00:18:48.110 [2024-04-26 16:28:33.601360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:18:48.110 [2024-04-26 16:28:33.601372] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state.
00:18:48.110 [2024-04-26 16:28:33.601385] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state
00:18:48.110 [2024-04-26 16:28:33.601395] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed
00:18:48.110 [2024-04-26 16:28:33.601406] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state
00:18:48.110 [2024-04-26 16:28:33.601431] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:48.110 [2024-04-26 16:28:33.601440] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller
00:18:48.110 [2024-04-26 16:28:35.609259] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:18:48.110 [2024-04-26 16:28:35.609289] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4300
00:18:48.110 [2024-04-26 16:28:35.609318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:18:48.110 [2024-04-26 16:28:35.609329] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] in failed state.
00:18:48.110 [2024-04-26 16:28:35.609434] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] Ctrlr is in error state
00:18:48.110 [2024-04-26 16:28:35.609447] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_mlx_0_1] controller reinitialization failed
00:18:48.110 [2024-04-26 16:28:35.609458] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] already in failed state
00:18:48.110 [2024-04-26 16:28:35.609504] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:18:48.110 [2024-04-26 16:28:35.609514] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_mlx_0_1] resetting controller
00:18:48.110 [2024-04-26 16:28:36.655119] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
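The reset attempts above fail twice on RDMA address resolution (at 16:28:31 and 16:28:33) and only succeed on the 16:28:36 attempt, i.e. the initiator keeps retrying on a roughly two-second cadence until the removed port is usable again. Purely as a shell-level illustration of that retry-until-success pattern (this is not the SPDK bdev_nvme reset path, which retries internally in C; the helper name retry_until_ok is hypothetical):

# Hypothetical helper, not part of the SPDK test scripts: run the given command
# until it succeeds, sleeping $delay seconds between attempts, for at most
# $attempts tries; returns non-zero if the budget is exhausted.
retry_until_ok() {
    local attempts=$1 delay=$2
    shift 2
    local i
    for ((i = 1; i <= attempts; i++)); do
        "$@" && return 0
        sleep "$delay"
    done
    return 1
}

# Example (hypothetical): wait up to ~20 s for a port to reappear after removal.
# retry_until_ok 10 2 ip link show mlx_0_1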
00:18:48.110
00:18:48.110 Latency(us)
00:18:48.110 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:48.110 Job: Nvme_mlx_0_0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:48.110 Verification LBA range: start 0x0 length 0x8000
00:18:48.110 Nvme_mlx_0_0n1 : 90.01 10071.12 39.34 0.00 0.00 12686.31 2122.80 7061019.60
00:18:48.110 Job: Nvme_mlx_0_1n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:48.110 Verification LBA range: start 0x0 length 0x8000
00:18:48.110 Nvme_mlx_0_1n1 : 90.01 9597.70 37.49 0.00 0.00 13312.63 2322.25 7061019.60
00:18:48.110 ===================================================================================================================
00:18:48.110 Total : 19668.82 76.83 0.00 0.00 12991.93 2122.80 7061019.60
00:18:48.110 Received shutdown signal, test time was about 90.000000 seconds
00:18:48.110
00:18:48.110 Latency(us)
00:18:48.110 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:48.110 ===================================================================================================================
00:18:48.110 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:48.110 16:29:50 -- target/device_removal.sh@123 -- # trap - SIGINT SIGTERM EXIT
00:18:48.110 16:29:50 -- target/device_removal.sh@124 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/try.txt
00:18:48.110 16:29:50 -- target/device_removal.sh@202 -- # killprocess 485848
00:18:48.110 16:29:50 -- common/autotest_common.sh@936 -- # '[' -z 485848 ']'
00:18:48.110 16:29:50 -- common/autotest_common.sh@940 -- # kill -0 485848
00:18:48.110 16:29:50 -- common/autotest_common.sh@941 -- # uname
00:18:48.110 16:29:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:18:48.110 16:29:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 485848
00:18:48.110 16:29:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:18:48.110 16:29:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:18:48.110 16:29:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 485848'
00:18:48.110 killing process with pid 485848
00:18:48.110 16:29:50 -- common/autotest_common.sh@955 -- # kill 485848
00:18:48.110 16:29:50 -- common/autotest_common.sh@960 -- # wait 485848
00:18:48.110 [2024-04-26 16:29:50.965114] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048
00:18:48.110 16:29:51 -- target/device_removal.sh@203 -- # nvmfpid=
00:18:48.110 16:29:51 -- target/device_removal.sh@205 -- # return 0
00:18:48.110
00:18:48.110 real 1m33.353s
00:18:48.110 user 4m33.129s
00:18:48.110 sys 0m3.737s
00:18:48.110 16:29:51 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:18:48.110 16:29:51 -- common/autotest_common.sh@10 -- # set +x
00:18:48.110 ************************************
00:18:48.110 END TEST nvmf_device_removal_pci_remove
00:18:48.110 ************************************
00:18:48.110 16:29:51 -- target/device_removal.sh@317 -- # nvmftestfini
00:18:48.110 16:29:51 -- nvmf/common.sh@477 -- # nvmfcleanup
00:18:48.110 16:29:51 -- nvmf/common.sh@117 -- # sync
00:18:48.110 16:29:51 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:18:48.110 16:29:51 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:18:48.110 16:29:51 -- nvmf/common.sh@120 -- # set +e
00:18:48.110 16:29:51 -- nvmf/common.sh@121 -- # for i in {1..20}
00:18:48.110 16:29:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:18:48.110 rmmod nvme_rdma
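The MiB/s column in the bdevperf summary above follows directly from the IOPS column: the jobs ran 4096-byte IOs, so MiB/s = IOPS * 4096 / 1048576 = IOPS / 256. A quick check of the reported figures (not part of the test output, just arithmetic with the standard bc tool):

# 4 KiB per IO => MiB/s = IOPS / 256
echo "scale=2; 10071.12 / 256" | bc   # 39.34, matches Nvme_mlx_0_0n1
echo "scale=2; 9597.70 / 256"  | bc   # 37.49, matches Nvme_mlx_0_1n1
echo "scale=2; 19668.82 / 256" | bc   # 76.83, matches the Total row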
00:18:48.110 rmmod nvme_fabrics
00:18:48.110 16:29:51 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:18:48.110 16:29:51 -- nvmf/common.sh@124 -- # set -e
00:18:48.110 16:29:51 -- nvmf/common.sh@125 -- # return 0
00:18:48.110 16:29:51 -- nvmf/common.sh@478 -- # '[' -n '' ']'
00:18:48.110 16:29:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:18:48.110 16:29:51 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]]
00:18:48.110 16:29:51 -- target/device_removal.sh@318 -- # clean_bond_device
00:18:48.110 16:29:51 -- target/device_removal.sh@240 -- # grep bond_nvmf
00:18:48.110 16:29:51 -- target/device_removal.sh@240 -- # ip link
00:18:48.110
00:18:48.110 real 3m13.958s
00:18:48.110 user 9m8.175s
00:18:48.110 sys 0m12.790s
00:18:48.110 16:29:51 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:18:48.110 16:29:51 -- common/autotest_common.sh@10 -- # set +x
00:18:48.110 ************************************
00:18:48.110 END TEST nvmf_device_removal
00:18:48.110 ************************************
00:18:48.110 16:29:51 -- nvmf/nvmf.sh@79 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma
00:18:48.110 16:29:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:18:48.110 16:29:51 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:18:48.110 16:29:51 -- common/autotest_common.sh@10 -- # set +x
00:18:48.110 ************************************
00:18:48.110 START TEST nvmf_srq_overwhelm
00:18:48.110 ************************************
00:18:48.111 16:29:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma
00:18:48.111 * Looking for test storage...
00:18:48.111 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:18:48.111 16:29:51 -- target/srq_overwhelm.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:18:48.111 16:29:51 -- nvmf/common.sh@7 -- # uname -s
00:18:48.111 16:29:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:18:48.111 16:29:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:18:48.111 16:29:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:18:48.111 16:29:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:18:48.111 16:29:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:18:48.111 16:29:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:18:48.111 16:29:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:18:48.111 16:29:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:18:48.111 16:29:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:18:48.111 16:29:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:18:48.111 16:29:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
00:18:48.111 16:29:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c
00:18:48.111 16:29:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:18:48.111 16:29:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:18:48.111 16:29:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:18:48.111 16:29:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:18:48.111 16:29:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:18:48.111 16:29:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:18:48.111 16:29:51 --
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:48.111 16:29:51 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:48.111 16:29:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.111 16:29:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.111 16:29:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.111 16:29:51 -- paths/export.sh@5 -- # export PATH 00:18:48.111 16:29:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.111 16:29:51 -- nvmf/common.sh@47 -- # : 0 00:18:48.111 16:29:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:48.111 16:29:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:48.111 16:29:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:48.111 16:29:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:48.111 16:29:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:48.111 16:29:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:48.111 16:29:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:48.111 16:29:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:48.111 16:29:51 -- target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:48.111 16:29:51 -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:48.111 16:29:51 -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:18:48.111 16:29:51 -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:18:48.111 16:29:51 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:18:48.111 16:29:51 -- 
nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:48.111 16:29:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:48.111 16:29:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:48.111 16:29:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:48.111 16:29:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:48.111 16:29:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:48.111 16:29:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:48.111 16:29:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:48.111 16:29:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:48.111 16:29:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:48.111 16:29:51 -- common/autotest_common.sh@10 -- # set +x 00:18:48.111 16:29:57 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:48.111 16:29:57 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:48.111 16:29:57 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:48.111 16:29:57 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:48.111 16:29:57 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:48.111 16:29:57 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:48.111 16:29:57 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:48.111 16:29:57 -- nvmf/common.sh@295 -- # net_devs=() 00:18:48.111 16:29:57 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:48.111 16:29:57 -- nvmf/common.sh@296 -- # e810=() 00:18:48.111 16:29:57 -- nvmf/common.sh@296 -- # local -ga e810 00:18:48.111 16:29:57 -- nvmf/common.sh@297 -- # x722=() 00:18:48.111 16:29:57 -- nvmf/common.sh@297 -- # local -ga x722 00:18:48.111 16:29:57 -- nvmf/common.sh@298 -- # mlx=() 00:18:48.111 16:29:57 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:48.111 16:29:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:48.111 16:29:57 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:48.111 16:29:57 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:48.111 16:29:57 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:48.111 16:29:57 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:48.111 16:29:57 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:48.111 16:29:57 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:48.111 16:29:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:48.111 16:29:57 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:48.111 16:29:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:48.111 16:29:57 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:48.111 16:29:57 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:48.111 16:29:57 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:18:48.111 16:29:57 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:18:48.111 16:29:57 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:18:48.111 16:29:57 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:18:48.111 16:29:57 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:18:48.111 16:29:57 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:48.111 16:29:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:48.111 16:29:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:18:48.111 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:18:48.111 16:29:57 -- nvmf/common.sh@342 -- # [[ 
mlx5_core == unknown ]] 00:18:48.111 16:29:57 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:48.111 16:29:57 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:18:48.111 16:29:57 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:18:48.111 16:29:57 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:48.111 16:29:57 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:48.111 16:29:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:48.111 16:29:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:18:48.111 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:18:48.111 16:29:57 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:48.111 16:29:57 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:48.111 16:29:57 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:18:48.111 16:29:57 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:18:48.111 16:29:57 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:48.111 16:29:57 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:48.111 16:29:57 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:48.111 16:29:57 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:18:48.111 16:29:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:48.111 16:29:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:48.111 16:29:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:48.111 16:29:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:48.111 16:29:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:18:48.111 Found net devices under 0000:18:00.0: mlx_0_0 00:18:48.111 16:29:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:48.111 16:29:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:48.111 16:29:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:48.111 16:29:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:48.112 16:29:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:48.112 16:29:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:18:48.112 Found net devices under 0000:18:00.1: mlx_0_1 00:18:48.112 16:29:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:48.112 16:29:57 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:48.112 16:29:57 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:48.112 16:29:57 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:48.112 16:29:57 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:18:48.112 16:29:57 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:18:48.112 16:29:57 -- nvmf/common.sh@409 -- # rdma_device_init 00:18:48.112 16:29:57 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:18:48.112 16:29:57 -- nvmf/common.sh@58 -- # uname 00:18:48.112 16:29:57 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:18:48.112 16:29:57 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:18:48.112 16:29:57 -- nvmf/common.sh@63 -- # modprobe ib_core 00:18:48.112 16:29:57 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:18:48.112 16:29:57 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:18:48.112 16:29:57 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:18:48.112 16:29:57 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:18:48.112 16:29:57 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:18:48.112 16:29:57 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:18:48.112 16:29:57 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:48.112 
16:29:57 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:18:48.112 16:29:57 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:48.112 16:29:57 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:48.112 16:29:57 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:48.112 16:29:57 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:48.112 16:29:57 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:48.112 16:29:57 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:48.112 16:29:57 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:48.112 16:29:57 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:48.112 16:29:57 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:48.112 16:29:57 -- nvmf/common.sh@105 -- # continue 2 00:18:48.112 16:29:57 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:48.112 16:29:57 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:48.112 16:29:57 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:48.112 16:29:57 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:48.112 16:29:57 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:48.112 16:29:57 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:48.112 16:29:57 -- nvmf/common.sh@105 -- # continue 2 00:18:48.112 16:29:57 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:48.112 16:29:57 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:18:48.112 16:29:57 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:48.112 16:29:57 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:48.112 16:29:57 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:48.112 16:29:57 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:48.371 16:29:57 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:18:48.371 16:29:57 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:18:48.371 16:29:57 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:18:48.371 15: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:48.371 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:18:48.371 altname enp24s0f0np0 00:18:48.371 altname ens785f0np0 00:18:48.371 inet 192.168.100.8/24 scope global mlx_0_0 00:18:48.371 valid_lft forever preferred_lft forever 00:18:48.371 16:29:57 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:48.371 16:29:57 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:18:48.371 16:29:57 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:48.371 16:29:57 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:48.371 16:29:57 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:48.371 16:29:57 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:48.371 16:29:57 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:18:48.371 16:29:57 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:18:48.371 16:29:57 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:18:48.371 16: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:48.371 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:18:48.371 altname enp24s0f1np1 00:18:48.371 altname ens785f1np1 00:18:48.371 inet 192.168.100.9/24 scope global mlx_0_1 00:18:48.371 valid_lft forever preferred_lft forever 00:18:48.371 16:29:57 -- nvmf/common.sh@411 -- # return 0 00:18:48.371 16:29:57 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:48.371 16:29:57 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:48.371 16:29:57 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 
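The per-interface address lookup traced above reduces to a short shell helper. A minimal sketch, assuming only that the interface name (e.g. mlx_0_0) is passed in; the pipeline itself is copied from the get_ip_address trace, and the helper name get_ipv4 is illustrative:

# Return the first IPv4 address configured on a given interface,
# using the same pipeline as the get_ip_address trace above.
get_ipv4() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
# On this test bed: get_ipv4 mlx_0_0 -> 192.168.100.8, get_ipv4 mlx_0_1 -> 192.168.100.9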
00:18:48.371 16:29:57 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:18:48.371 16:29:57 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:18:48.371 16:29:57 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:48.371 16:29:57 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:48.371 16:29:57 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:48.371 16:29:57 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:48.371 16:29:57 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:48.371 16:29:57 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:48.371 16:29:57 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:48.371 16:29:57 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:48.371 16:29:57 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:48.371 16:29:57 -- nvmf/common.sh@105 -- # continue 2 00:18:48.371 16:29:57 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:48.371 16:29:57 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:48.371 16:29:57 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:48.371 16:29:57 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:48.371 16:29:57 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:48.371 16:29:57 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:48.371 16:29:57 -- nvmf/common.sh@105 -- # continue 2 00:18:48.371 16:29:57 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:48.371 16:29:57 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:18:48.371 16:29:57 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:48.371 16:29:57 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:48.372 16:29:57 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:48.372 16:29:57 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:48.372 16:29:57 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:48.372 16:29:57 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:18:48.372 16:29:57 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:48.372 16:29:57 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:48.372 16:29:57 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:48.372 16:29:57 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:48.372 16:29:57 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:18:48.372 192.168.100.9' 00:18:48.372 16:29:57 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:48.372 192.168.100.9' 00:18:48.372 16:29:57 -- nvmf/common.sh@446 -- # head -n 1 00:18:48.372 16:29:57 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:48.372 16:29:57 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:18:48.372 192.168.100.9' 00:18:48.372 16:29:57 -- nvmf/common.sh@447 -- # tail -n +2 00:18:48.372 16:29:57 -- nvmf/common.sh@447 -- # head -n 1 00:18:48.372 16:29:57 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:48.372 16:29:57 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:18:48.372 16:29:57 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:48.372 16:29:57 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:18:48.372 16:29:57 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:18:48.372 16:29:57 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:18:48.372 16:29:57 -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:18:48.372 16:29:57 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:48.372 16:29:57 -- common/autotest_common.sh@710 
-- # xtrace_disable 00:18:48.372 16:29:57 -- common/autotest_common.sh@10 -- # set +x 00:18:48.372 16:29:57 -- nvmf/common.sh@470 -- # nvmfpid=501014 00:18:48.372 16:29:57 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:48.372 16:29:57 -- nvmf/common.sh@471 -- # waitforlisten 501014 00:18:48.372 16:29:57 -- common/autotest_common.sh@817 -- # '[' -z 501014 ']' 00:18:48.372 16:29:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:48.372 16:29:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:48.372 16:29:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:48.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:48.372 16:29:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:48.372 16:29:57 -- common/autotest_common.sh@10 -- # set +x 00:18:48.372 [2024-04-26 16:29:57.280575] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:18:48.372 [2024-04-26 16:29:57.280633] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:48.372 EAL: No free 2048 kB hugepages reported on node 1 00:18:48.372 [2024-04-26 16:29:57.353628] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:48.631 [2024-04-26 16:29:57.434723] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:48.631 [2024-04-26 16:29:57.434768] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:48.631 [2024-04-26 16:29:57.434777] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:48.631 [2024-04-26 16:29:57.434801] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:48.631 [2024-04-26 16:29:57.434808] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:48.631 [2024-04-26 16:29:57.434876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:48.631 [2024-04-26 16:29:57.434961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:48.631 [2024-04-26 16:29:57.435025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:48.631 [2024-04-26 16:29:57.435026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.198 16:29:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:49.198 16:29:58 -- common/autotest_common.sh@850 -- # return 0 00:18:49.198 16:29:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:49.198 16:29:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:49.198 16:29:58 -- common/autotest_common.sh@10 -- # set +x 00:18:49.198 16:29:58 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:49.198 16:29:58 -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:18:49.198 16:29:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:49.198 16:29:58 -- common/autotest_common.sh@10 -- # set +x 00:18:49.198 [2024-04-26 16:29:58.169572] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x57e310/0x582800) succeed. 
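Stripped of the autotest wrappers, the target bring-up traced here comes down to loading the NVMe-oF RDMA host module, starting nvmf_tgt, and creating the RDMA transport over the RPC socket. A minimal sketch, assuming scripts/rpc.py is called directly instead of through the rpc_cmd wrapper and paths are relative to the SPDK checkout; all arguments are copied from the trace:

# Kernel side: NVMe-oF over RDMA (the IB/RDMA core modules were loaded earlier in the trace).
modprobe nvme-rdma

# Start the SPDK NVMe-oF target: shm id 0, tracepoint group mask 0xFFFF, core mask 0xF.
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
# (the harness waits for the app to listen on /var/tmp/spdk.sock before issuing RPCs)

# Create the RDMA transport with the flags used by the test above.
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024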
00:18:49.198 [2024-04-26 16:29:58.179855] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x57f950/0x5c3e90) succeed. 00:18:49.198 16:29:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:49.198 16:29:58 -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:18:49.458 16:29:58 -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:18:49.458 16:29:58 -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:18:49.458 16:29:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:49.458 16:29:58 -- common/autotest_common.sh@10 -- # set +x 00:18:49.458 16:29:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:49.458 16:29:58 -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:49.458 16:29:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:49.458 16:29:58 -- common/autotest_common.sh@10 -- # set +x 00:18:49.458 Malloc0 00:18:49.458 16:29:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:49.458 16:29:58 -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:18:49.458 16:29:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:49.458 16:29:58 -- common/autotest_common.sh@10 -- # set +x 00:18:49.458 16:29:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:49.458 16:29:58 -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:18:49.458 16:29:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:49.458 16:29:58 -- common/autotest_common.sh@10 -- # set +x 00:18:49.458 [2024-04-26 16:29:58.278542] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:49.458 16:29:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:49.458 16:29:58 -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:18:51.363 16:29:59 -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:18:51.363 16:29:59 -- common/autotest_common.sh@1221 -- # local i=0 00:18:51.363 16:29:59 -- common/autotest_common.sh@1222 -- # lsblk -l -o NAME 00:18:51.363 16:29:59 -- common/autotest_common.sh@1222 -- # grep -q -w nvme0n1 00:18:51.363 16:29:59 -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME 00:18:51.363 16:29:59 -- common/autotest_common.sh@1228 -- # grep -q -w nvme0n1 00:18:51.363 16:29:59 -- common/autotest_common.sh@1232 -- # return 0 00:18:51.363 16:29:59 -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:18:51.363 16:29:59 -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:51.363 16:29:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:51.363 16:29:59 -- common/autotest_common.sh@10 -- # set +x 00:18:51.363 16:29:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:51.363 16:29:59 -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:51.363 16:29:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:51.363 16:29:59 -- common/autotest_common.sh@10 -- # set +x 00:18:51.363 Malloc1 00:18:51.363 16:29:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:51.363 16:29:59 -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:51.363 16:29:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:51.363 16:29:59 -- common/autotest_common.sh@10 -- # set +x 00:18:51.363 16:29:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:51.363 16:29:59 -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:51.363 16:29:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:51.363 16:29:59 -- common/autotest_common.sh@10 -- # set +x 00:18:51.363 16:29:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:51.363 16:29:59 -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:18:52.737 16:30:01 -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:18:52.737 16:30:01 -- common/autotest_common.sh@1221 -- # local i=0 00:18:52.737 16:30:01 -- common/autotest_common.sh@1222 -- # lsblk -l -o NAME 00:18:52.737 16:30:01 -- common/autotest_common.sh@1222 -- # grep -q -w nvme1n1 00:18:52.737 16:30:01 -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME 00:18:52.737 16:30:01 -- common/autotest_common.sh@1228 -- # grep -q -w nvme1n1 00:18:52.737 16:30:01 -- common/autotest_common.sh@1232 -- # return 0 00:18:52.737 16:30:01 -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:18:52.737 16:30:01 -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:18:52.738 16:30:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:52.738 16:30:01 -- common/autotest_common.sh@10 -- # set +x 00:18:52.738 16:30:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:52.738 16:30:01 -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:18:52.738 16:30:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:52.738 16:30:01 -- common/autotest_common.sh@10 -- # set +x 00:18:52.738 Malloc2 00:18:52.738 16:30:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:52.738 16:30:01 -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:18:52.738 16:30:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:52.738 16:30:01 -- common/autotest_common.sh@10 -- # set +x 00:18:52.738 16:30:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:52.738 16:30:01 -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:18:52.738 16:30:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:52.738 16:30:01 -- common/autotest_common.sh@10 -- # set +x 00:18:52.738 16:30:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:52.738 16:30:01 -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:18:54.111 16:30:03 -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:18:54.111 16:30:03 -- common/autotest_common.sh@1221 -- # local i=0 00:18:54.111 16:30:03 -- common/autotest_common.sh@1222 -- # lsblk -l -o NAME 00:18:54.111 16:30:03 -- common/autotest_common.sh@1222 -- # grep -q -w nvme2n1 00:18:54.369 16:30:03 -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME 00:18:54.369 16:30:03 -- 
common/autotest_common.sh@1228 -- # grep -q -w nvme2n1 00:18:54.369 16:30:03 -- common/autotest_common.sh@1232 -- # return 0 00:18:54.369 16:30:03 -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:18:54.369 16:30:03 -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:18:54.369 16:30:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.369 16:30:03 -- common/autotest_common.sh@10 -- # set +x 00:18:54.369 16:30:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.369 16:30:03 -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:18:54.369 16:30:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.369 16:30:03 -- common/autotest_common.sh@10 -- # set +x 00:18:54.369 Malloc3 00:18:54.369 16:30:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.369 16:30:03 -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:18:54.369 16:30:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.369 16:30:03 -- common/autotest_common.sh@10 -- # set +x 00:18:54.369 16:30:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.369 16:30:03 -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:18:54.369 16:30:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:54.369 16:30:03 -- common/autotest_common.sh@10 -- # set +x 00:18:54.369 16:30:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:54.369 16:30:03 -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:18:55.743 16:30:04 -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:18:55.743 16:30:04 -- common/autotest_common.sh@1221 -- # local i=0 00:18:55.743 16:30:04 -- common/autotest_common.sh@1222 -- # lsblk -l -o NAME 00:18:55.743 16:30:04 -- common/autotest_common.sh@1222 -- # grep -q -w nvme3n1 00:18:56.002 16:30:04 -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME 00:18:56.002 16:30:04 -- common/autotest_common.sh@1228 -- # grep -q -w nvme3n1 00:18:56.002 16:30:04 -- common/autotest_common.sh@1232 -- # return 0 00:18:56.002 16:30:04 -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:18:56.002 16:30:04 -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:18:56.002 16:30:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.002 16:30:04 -- common/autotest_common.sh@10 -- # set +x 00:18:56.002 16:30:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.002 16:30:04 -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:18:56.002 16:30:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.002 16:30:04 -- common/autotest_common.sh@10 -- # set +x 00:18:56.002 Malloc4 00:18:56.002 16:30:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.002 16:30:04 -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:18:56.002 16:30:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.002 16:30:04 -- common/autotest_common.sh@10 -- # set +x 00:18:56.002 16:30:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.002 16:30:04 -- target/srq_overwhelm.sh@26 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:18:56.002 16:30:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:56.002 16:30:04 -- common/autotest_common.sh@10 -- # set +x 00:18:56.002 16:30:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:56.002 16:30:04 -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:18:57.389 16:30:06 -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:18:57.389 16:30:06 -- common/autotest_common.sh@1221 -- # local i=0 00:18:57.389 16:30:06 -- common/autotest_common.sh@1222 -- # lsblk -l -o NAME 00:18:57.389 16:30:06 -- common/autotest_common.sh@1222 -- # grep -q -w nvme4n1 00:18:57.389 16:30:06 -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME 00:18:57.389 16:30:06 -- common/autotest_common.sh@1228 -- # grep -q -w nvme4n1 00:18:57.647 16:30:06 -- common/autotest_common.sh@1232 -- # return 0 00:18:57.647 16:30:06 -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:18:57.647 16:30:06 -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:18:57.647 16:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:57.647 16:30:06 -- common/autotest_common.sh@10 -- # set +x 00:18:57.647 16:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:57.647 16:30:06 -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:18:57.647 16:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:57.647 16:30:06 -- common/autotest_common.sh@10 -- # set +x 00:18:57.647 Malloc5 00:18:57.647 16:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:57.647 16:30:06 -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:18:57.647 16:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:57.647 16:30:06 -- common/autotest_common.sh@10 -- # set +x 00:18:57.647 16:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:57.647 16:30:06 -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:18:57.647 16:30:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:57.647 16:30:06 -- common/autotest_common.sh@10 -- # set +x 00:18:57.647 16:30:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:57.647 16:30:06 -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:18:59.023 16:30:08 -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:18:59.023 16:30:08 -- common/autotest_common.sh@1221 -- # local i=0 00:18:59.024 16:30:08 -- common/autotest_common.sh@1222 -- # lsblk -l -o NAME 00:18:59.024 16:30:08 -- common/autotest_common.sh@1222 -- # grep -q -w nvme5n1 00:18:59.024 16:30:08 -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME 00:18:59.024 16:30:08 -- common/autotest_common.sh@1228 -- # grep -q -w nvme5n1 00:18:59.024 16:30:08 -- common/autotest_common.sh@1232 -- # return 0 00:18:59.024 16:30:08 -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13 
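Each of the six iterations traced above repeats the same steps, so the whole setup condenses to one loop. A sketch under the assumption that the RPCs are issued straight through scripts/rpc.py; subsystem names, malloc bdev size and block size, listener address, and host NQN/ID are all copied from the trace:

# One malloc-backed subsystem per iteration, exported over RDMA and connected from the host.
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
HOSTID=800e967b-538f-e911-906e-001635649f5c
for i in $(seq 0 5); do
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i          # 64 MB, 512-byte blocks
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
    # Host side: connect with the NVME_CONNECT command line chosen earlier ('nvme connect -i 15'),
    # then poll lsblk until /dev/nvme${i}n1 shows up (the waitforblk step in the trace).
    nvme connect -i 15 --hostnqn=$HOSTNQN --hostid=$HOSTID \
        -t rdma -n nqn.2016-06.io.spdk:cnode$i -a 192.168.100.8 -s 4420
    until lsblk -l -o NAME | grep -q -w nvme${i}n1; do sleep 1; done
done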
00:18:59.282 [global] 00:18:59.282 thread=1 00:18:59.282 invalidate=1 00:18:59.282 rw=read 00:18:59.282 time_based=1 00:18:59.282 runtime=10 00:18:59.282 ioengine=libaio 00:18:59.282 direct=1 00:18:59.282 bs=1048576 00:18:59.282 iodepth=128 00:18:59.282 norandommap=1 00:18:59.282 numjobs=13 00:18:59.282 00:18:59.282 [job0] 00:18:59.282 filename=/dev/nvme0n1 00:18:59.282 [job1] 00:18:59.282 filename=/dev/nvme1n1 00:18:59.282 [job2] 00:18:59.282 filename=/dev/nvme2n1 00:18:59.282 [job3] 00:18:59.282 filename=/dev/nvme3n1 00:18:59.282 [job4] 00:18:59.282 filename=/dev/nvme4n1 00:18:59.282 [job5] 00:18:59.282 filename=/dev/nvme5n1 00:18:59.282 Could not set queue depth (nvme0n1) 00:18:59.282 Could not set queue depth (nvme1n1) 00:18:59.282 Could not set queue depth (nvme2n1) 00:18:59.282 Could not set queue depth (nvme3n1) 00:18:59.282 Could not set queue depth (nvme4n1) 00:18:59.282 Could not set queue depth (nvme5n1) 00:18:59.542 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:18:59.542 ... 00:18:59.542 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:18:59.542 ... 00:18:59.542 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:18:59.542 ... 00:18:59.542 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:18:59.542 ... 00:18:59.542 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:18:59.542 ... 00:18:59.542 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:18:59.542 ... 
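For reference, the job file printed by fio-wrapper above can be reproduced as a standalone configuration. The sketch below is reconstructed from the traced output only; the file name srq_overwhelm.fio is illustrative, and fio-wrapper may pass additional options that are not visible here. With numjobs=13 across six jobs it also accounts for the 78 threads fio reports next.

# Re-create the traced job file and run it directly with fio.
cat > srq_overwhelm.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=read
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=1048576
iodepth=128
norandommap=1
numjobs=13

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme1n1
[job2]
filename=/dev/nvme2n1
[job3]
filename=/dev/nvme3n1
[job4]
filename=/dev/nvme4n1
[job5]
filename=/dev/nvme5n1
EOF
fio srq_overwhelm.fio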
00:18:59.542 fio-3.35 00:18:59.542 Starting 78 threads 00:19:14.444 00:19:14.444 job0: (groupid=0, jobs=1): err= 0: pid=503242: Fri Apr 26 16:30:21 2024 00:19:14.444 read: IOPS=4, BW=4340KiB/s (4444kB/s)(43.0MiB/10145msec) 00:19:14.444 slat (usec): min=927, max=2114.8k, avg=233293.05, stdev=629823.68 00:19:14.444 clat (msec): min=112, max=10143, avg=7449.89, stdev=2991.53 00:19:14.444 lat (msec): min=2186, max=10144, avg=7683.19, stdev=2790.11 00:19:14.444 clat percentiles (msec): 00:19:14.444 | 1.00th=[ 113], 5.00th=[ 2198], 10.00th=[ 2232], 20.00th=[ 4396], 00:19:14.444 | 30.00th=[ 6544], 40.00th=[ 6611], 50.00th=[ 8658], 60.00th=[ 8658], 00:19:14.444 | 70.00th=[10134], 80.00th=[10134], 90.00th=[10134], 95.00th=[10134], 00:19:14.444 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:19:14.444 | 99.99th=[10134] 00:19:14.444 lat (msec) : 250=2.33%, >=2000=97.67% 00:19:14.444 cpu : usr=0.00%, sys=0.46%, ctx=60, majf=0, minf=11009 00:19:14.444 IO depths : 1=2.3%, 2=4.7%, 4=9.3%, 8=18.6%, 16=37.2%, 32=27.9%, >=64=0.0% 00:19:14.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.445 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:14.445 issued rwts: total=43,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.445 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.445 job0: (groupid=0, jobs=1): err= 0: pid=503243: Fri Apr 26 16:30:21 2024 00:19:14.445 read: IOPS=2, BW=2330KiB/s (2386kB/s)(23.0MiB/10108msec) 00:19:14.445 slat (usec): min=886, max=2119.8k, avg=434910.54, stdev=822822.64 00:19:14.445 clat (msec): min=103, max=10104, avg=5903.58, stdev=3319.56 00:19:14.445 lat (msec): min=113, max=10106, avg=6338.49, stdev=3177.42 00:19:14.445 clat percentiles (msec): 00:19:14.445 | 1.00th=[ 105], 5.00th=[ 114], 10.00th=[ 2198], 20.00th=[ 2232], 00:19:14.445 | 30.00th=[ 4396], 40.00th=[ 4396], 50.00th=[ 6544], 60.00th=[ 6544], 00:19:14.445 | 70.00th=[ 8658], 80.00th=[10000], 90.00th=[10134], 95.00th=[10134], 00:19:14.445 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:19:14.445 | 99.99th=[10134] 00:19:14.445 lat (msec) : 250=8.70%, >=2000=91.30% 00:19:14.445 cpu : usr=0.00%, sys=0.22%, ctx=48, majf=0, minf=5889 00:19:14.445 IO depths : 1=4.3%, 2=8.7%, 4=17.4%, 8=34.8%, 16=34.8%, 32=0.0%, >=64=0.0% 00:19:14.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.445 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:19:14.445 issued rwts: total=23,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.445 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.445 job0: (groupid=0, jobs=1): err= 0: pid=503244: Fri Apr 26 16:30:21 2024 00:19:14.445 read: IOPS=10, BW=10.6MiB/s (11.1MB/s)(107MiB/10137msec) 00:19:14.445 slat (usec): min=409, max=2136.1k, avg=93529.36, stdev=401707.20 00:19:14.445 clat (msec): min=128, max=10127, avg=3133.27, stdev=2477.00 00:19:14.445 lat (msec): min=1954, max=10136, avg=3226.80, stdev=2550.33 00:19:14.445 clat percentiles (msec): 00:19:14.445 | 1.00th=[ 1955], 5.00th=[ 1955], 10.00th=[ 1989], 20.00th=[ 2005], 00:19:14.445 | 30.00th=[ 2039], 40.00th=[ 2089], 50.00th=[ 2106], 60.00th=[ 2140], 00:19:14.445 | 70.00th=[ 2265], 80.00th=[ 2265], 90.00th=[ 8658], 95.00th=[10000], 00:19:14.445 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:19:14.445 | 99.99th=[10134] 00:19:14.445 lat (msec) : 250=0.93%, 2000=16.82%, >=2000=82.24% 00:19:14.445 cpu : usr=0.00%, sys=0.77%, 
ctx=86, majf=0, minf=27393 00:19:14.445 IO depths : 1=0.9%, 2=1.9%, 4=3.7%, 8=7.5%, 16=15.0%, 32=29.9%, >=64=41.1% 00:19:14.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.445 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:19:14.445 issued rwts: total=107,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.445 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.445 job0: (groupid=0, jobs=1): err= 0: pid=503245: Fri Apr 26 16:30:21 2024 00:19:14.445 read: IOPS=1, BW=1933KiB/s (1980kB/s)(23.0MiB/12183msec) 00:19:14.445 slat (usec): min=946, max=2139.0k, avg=435409.44, stdev=829135.24 00:19:14.445 clat (msec): min=2167, max=12177, avg=10042.21, stdev=3143.60 00:19:14.445 lat (msec): min=4275, max=12182, avg=10477.62, stdev=2659.68 00:19:14.445 clat percentiles (msec): 00:19:14.445 | 1.00th=[ 2165], 5.00th=[ 4279], 10.00th=[ 4329], 20.00th=[ 6477], 00:19:14.445 | 30.00th=[ 8658], 40.00th=[10805], 50.00th=[12013], 60.00th=[12147], 00:19:14.445 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:19:14.445 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:19:14.445 | 99.99th=[12147] 00:19:14.445 lat (msec) : >=2000=100.00% 00:19:14.445 cpu : usr=0.00%, sys=0.19%, ctx=43, majf=0, minf=5889 00:19:14.445 IO depths : 1=4.3%, 2=8.7%, 4=17.4%, 8=34.8%, 16=34.8%, 32=0.0%, >=64=0.0% 00:19:14.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.445 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:19:14.445 issued rwts: total=23,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.445 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.445 job0: (groupid=0, jobs=1): err= 0: pid=503246: Fri Apr 26 16:30:21 2024 00:19:14.445 read: IOPS=4, BW=4331KiB/s (4435kB/s)(52.0MiB/12295msec) 00:19:14.445 slat (usec): min=913, max=2122.3k, avg=195020.40, stdev=584743.51 00:19:14.445 clat (msec): min=2153, max=12293, avg=11025.14, stdev=2479.52 00:19:14.445 lat (msec): min=4262, max=12294, avg=11220.16, stdev=2144.21 00:19:14.445 clat percentiles (msec): 00:19:14.445 | 1.00th=[ 2165], 5.00th=[ 4279], 10.00th=[ 6409], 20.00th=[10671], 00:19:14.445 | 30.00th=[12013], 40.00th=[12147], 50.00th=[12147], 60.00th=[12281], 00:19:14.445 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:19:14.445 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:19:14.445 | 99.99th=[12281] 00:19:14.445 lat (msec) : >=2000=100.00% 00:19:14.445 cpu : usr=0.00%, sys=0.42%, ctx=95, majf=0, minf=13313 00:19:14.445 IO depths : 1=1.9%, 2=3.8%, 4=7.7%, 8=15.4%, 16=30.8%, 32=40.4%, >=64=0.0% 00:19:14.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.445 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:14.445 issued rwts: total=52,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.445 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.445 job0: (groupid=0, jobs=1): err= 0: pid=503247: Fri Apr 26 16:30:21 2024 00:19:14.445 read: IOPS=5, BW=5627KiB/s (5762kB/s)(56.0MiB/10191msec) 00:19:14.445 slat (usec): min=615, max=2097.9k, avg=179667.62, stdev=555709.59 00:19:14.445 clat (msec): min=128, max=10189, avg=7626.24, stdev=3337.62 00:19:14.445 lat (msec): min=2180, max=10190, avg=7805.90, stdev=3194.42 00:19:14.445 clat percentiles (msec): 00:19:14.445 | 1.00th=[ 129], 5.00th=[ 2198], 10.00th=[ 2198], 20.00th=[ 4329], 00:19:14.445 | 30.00th=[ 6477], 
40.00th=[ 8658], 50.00th=[10000], 60.00th=[10134], 00:19:14.445 | 70.00th=[10134], 80.00th=[10134], 90.00th=[10134], 95.00th=[10134], 00:19:14.445 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:19:14.445 | 99.99th=[10134] 00:19:14.445 lat (msec) : 250=1.79%, >=2000=98.21% 00:19:14.445 cpu : usr=0.01%, sys=0.53%, ctx=109, majf=0, minf=14337 00:19:14.445 IO depths : 1=1.8%, 2=3.6%, 4=7.1%, 8=14.3%, 16=28.6%, 32=44.6%, >=64=0.0% 00:19:14.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.445 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:14.445 issued rwts: total=56,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.445 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.445 job0: (groupid=0, jobs=1): err= 0: pid=503248: Fri Apr 26 16:30:21 2024 00:19:14.445 read: IOPS=3, BW=4019KiB/s (4116kB/s)(48.0MiB/12229msec) 00:19:14.445 slat (usec): min=606, max=2088.1k, avg=209340.12, stdev=598961.41 00:19:14.445 clat (msec): min=2180, max=12227, avg=9648.57, stdev=3042.51 00:19:14.445 lat (msec): min=4264, max=12228, avg=9857.91, stdev=2857.81 00:19:14.445 clat percentiles (msec): 00:19:14.445 | 1.00th=[ 2165], 5.00th=[ 4279], 10.00th=[ 4329], 20.00th=[ 6409], 00:19:14.445 | 30.00th=[ 8557], 40.00th=[10671], 50.00th=[10671], 60.00th=[12013], 00:19:14.445 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12281], 95.00th=[12281], 00:19:14.445 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:19:14.445 | 99.99th=[12281] 00:19:14.445 lat (msec) : >=2000=100.00% 00:19:14.445 cpu : usr=0.01%, sys=0.24%, ctx=65, majf=0, minf=12289 00:19:14.445 IO depths : 1=2.1%, 2=4.2%, 4=8.3%, 8=16.7%, 16=33.3%, 32=35.4%, >=64=0.0% 00:19:14.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.445 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:14.445 issued rwts: total=48,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.445 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.445 job0: (groupid=0, jobs=1): err= 0: pid=503249: Fri Apr 26 16:30:21 2024 00:19:14.445 read: IOPS=3, BW=3947KiB/s (4042kB/s)(39.0MiB/10117msec) 00:19:14.445 slat (usec): min=827, max=2122.8k, avg=256872.50, stdev=664097.03 00:19:14.445 clat (msec): min=98, max=10114, avg=6279.47, stdev=3419.97 00:19:14.445 lat (msec): min=118, max=10116, avg=6536.35, stdev=3318.19 00:19:14.445 clat percentiles (msec): 00:19:14.445 | 1.00th=[ 99], 5.00th=[ 118], 10.00th=[ 2198], 20.00th=[ 2232], 00:19:14.445 | 30.00th=[ 4396], 40.00th=[ 6544], 50.00th=[ 6544], 60.00th=[ 6544], 00:19:14.445 | 70.00th=[10000], 80.00th=[10134], 90.00th=[10134], 95.00th=[10134], 00:19:14.445 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:19:14.445 | 99.99th=[10134] 00:19:14.445 lat (msec) : 100=2.56%, 250=5.13%, >=2000=92.31% 00:19:14.445 cpu : usr=0.00%, sys=0.35%, ctx=57, majf=0, minf=9985 00:19:14.445 IO depths : 1=2.6%, 2=5.1%, 4=10.3%, 8=20.5%, 16=41.0%, 32=20.5%, >=64=0.0% 00:19:14.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.445 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:14.445 issued rwts: total=39,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.445 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.445 job0: (groupid=0, jobs=1): err= 0: pid=503250: Fri Apr 26 16:30:21 2024 00:19:14.445 read: IOPS=2, BW=2696KiB/s (2761kB/s)(32.0MiB/12154msec) 00:19:14.445 slat (usec): 
min=924, max=4308.9k, avg=379259.09, stdev=1105654.23 00:19:14.445 clat (msec): min=17, max=12152, avg=11116.25, stdev=2591.16 00:19:14.445 lat (msec): min=4229, max=12153, avg=11495.51, stdev=1620.61 00:19:14.445 clat percentiles (msec): 00:19:14.445 | 1.00th=[ 17], 5.00th=[ 4245], 10.00th=[ 8557], 20.00th=[10671], 00:19:14.445 | 30.00th=[12013], 40.00th=[12013], 50.00th=[12013], 60.00th=[12147], 00:19:14.445 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:19:14.445 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:19:14.445 | 99.99th=[12147] 00:19:14.445 lat (msec) : 20=3.12%, >=2000=96.88% 00:19:14.445 cpu : usr=0.00%, sys=0.22%, ctx=47, majf=0, minf=8193 00:19:14.445 IO depths : 1=3.1%, 2=6.2%, 4=12.5%, 8=25.0%, 16=50.0%, 32=3.1%, >=64=0.0% 00:19:14.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.445 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:19:14.445 issued rwts: total=32,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.445 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.445 job0: (groupid=0, jobs=1): err= 0: pid=503251: Fri Apr 26 16:30:21 2024 00:19:14.445 read: IOPS=1, BW=1688KiB/s (1728kB/s)(20.0MiB/12136msec) 00:19:14.445 slat (usec): min=927, max=4220.2k, avg=605769.82, stdev=1161043.09 00:19:14.445 clat (msec): min=20, max=12134, avg=9294.94, stdev=3554.01 00:19:14.445 lat (msec): min=4240, max=12135, avg=9900.71, stdev=2853.40 00:19:14.445 clat percentiles (msec): 00:19:14.445 | 1.00th=[ 21], 5.00th=[ 21], 10.00th=[ 4245], 20.00th=[ 6409], 00:19:14.445 | 30.00th=[ 6409], 40.00th=[ 8557], 50.00th=[10671], 60.00th=[12013], 00:19:14.445 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:19:14.445 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:19:14.445 | 99.99th=[12147] 00:19:14.445 lat (msec) : 50=5.00%, >=2000=95.00% 00:19:14.445 cpu : usr=0.01%, sys=0.12%, ctx=41, majf=0, minf=5121 00:19:14.445 IO depths : 1=5.0%, 2=10.0%, 4=20.0%, 8=40.0%, 16=25.0%, 32=0.0%, >=64=0.0% 00:19:14.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.445 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:19:14.445 issued rwts: total=20,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.445 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.445 job0: (groupid=0, jobs=1): err= 0: pid=503252: Fri Apr 26 16:30:21 2024 00:19:14.445 read: IOPS=13, BW=13.7MiB/s (14.4MB/s)(167MiB/12177msec) 00:19:14.445 slat (usec): min=117, max=2136.1k, avg=60212.98, stdev=320096.28 00:19:14.445 clat (msec): min=952, max=11840, avg=8862.78, stdev=3913.53 00:19:14.445 lat (msec): min=954, max=11844, avg=8922.99, stdev=3882.03 00:19:14.445 clat percentiles (msec): 00:19:14.445 | 1.00th=[ 953], 5.00th=[ 1435], 10.00th=[ 1435], 20.00th=[ 4245], 00:19:14.445 | 30.00th=[ 8658], 40.00th=[10671], 50.00th=[11208], 60.00th=[11342], 00:19:14.445 | 70.00th=[11342], 80.00th=[11476], 90.00th=[11610], 95.00th=[11745], 00:19:14.445 | 99.00th=[11879], 99.50th=[11879], 99.90th=[11879], 99.95th=[11879], 00:19:14.445 | 99.99th=[11879] 00:19:14.445 bw ( KiB/s): min= 1957, max=32768, per=0.38%, avg=11688.00, stdev=10304.09, samples=7 00:19:14.445 iops : min= 1, max= 32, avg=11.14, stdev=10.30, samples=7 00:19:14.445 lat (msec) : 1000=1.80%, 2000=14.37%, >=2000=83.83% 00:19:14.445 cpu : usr=0.02%, sys=0.83%, ctx=173, majf=0, minf=32769 00:19:14.445 IO depths : 1=0.6%, 2=1.2%, 4=2.4%, 8=4.8%, 
16=9.6%, 32=19.2%, >=64=62.3% 00:19:14.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.445 complete : 0=0.0%, 4=97.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.4% 00:19:14.445 issued rwts: total=167,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.445 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.445 job0: (groupid=0, jobs=1): err= 0: pid=503253: Fri Apr 26 16:30:21 2024 00:19:14.445 read: IOPS=19, BW=19.4MiB/s (20.3MB/s)(196MiB/10108msec) 00:19:14.445 slat (usec): min=63, max=2109.1k, avg=51027.48, stdev=278021.60 00:19:14.445 clat (msec): min=104, max=8744, avg=5817.69, stdev=3089.41 00:19:14.445 lat (msec): min=112, max=9482, avg=5868.72, stdev=3068.12 00:19:14.445 clat percentiles (msec): 00:19:14.445 | 1.00th=[ 113], 5.00th=[ 1485], 10.00th=[ 1502], 20.00th=[ 1603], 00:19:14.445 | 30.00th=[ 2232], 40.00th=[ 6477], 50.00th=[ 8087], 60.00th=[ 8154], 00:19:14.445 | 70.00th=[ 8288], 80.00th=[ 8423], 90.00th=[ 8490], 95.00th=[ 8658], 00:19:14.445 | 99.00th=[ 8658], 99.50th=[ 8792], 99.90th=[ 8792], 99.95th=[ 8792], 00:19:14.445 | 99.99th=[ 8792] 00:19:14.445 bw ( KiB/s): min= 2048, max=63488, per=0.66%, avg=20156.43, stdev=21707.14, samples=7 00:19:14.445 iops : min= 2, max= 62, avg=19.43, stdev=21.34, samples=7 00:19:14.445 lat (msec) : 250=2.04%, 2000=24.49%, >=2000=73.47% 00:19:14.445 cpu : usr=0.01%, sys=1.14%, ctx=214, majf=0, minf=32769 00:19:14.445 IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=4.1%, 16=8.2%, 32=16.3%, >=64=67.9% 00:19:14.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.445 complete : 0=0.0%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.4% 00:19:14.445 issued rwts: total=196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.445 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.445 job0: (groupid=0, jobs=1): err= 0: pid=503254: Fri Apr 26 16:30:21 2024 00:19:14.445 read: IOPS=0, BW=675KiB/s (691kB/s)(8192KiB/12132msec) 00:19:14.445 slat (msec): min=2, max=4293, avg=1514.27, stdev=1863.68 00:19:14.445 clat (msec): min=16, max=12021, avg=8352.65, stdev=4215.10 00:19:14.445 lat (msec): min=4250, max=12131, avg=9866.93, stdev=2694.41 00:19:14.445 clat percentiles (msec): 00:19:14.445 | 1.00th=[ 17], 5.00th=[ 17], 10.00th=[ 17], 20.00th=[ 4245], 00:19:14.445 | 30.00th=[ 8557], 40.00th=[ 8557], 50.00th=[ 8557], 60.00th=[10671], 00:19:14.445 | 70.00th=[10671], 80.00th=[12013], 90.00th=[12013], 95.00th=[12013], 00:19:14.445 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:19:14.445 | 99.99th=[12013] 00:19:14.445 lat (msec) : 20=12.50%, >=2000=87.50% 00:19:14.445 cpu : usr=0.00%, sys=0.06%, ctx=33, majf=0, minf=2049 00:19:14.445 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:14.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.445 complete : 0=0.0%, 4=0.0%, 8=100.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.445 issued rwts: total=8,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.445 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.445 job1: (groupid=0, jobs=1): err= 0: pid=503255: Fri Apr 26 16:30:21 2024 00:19:14.445 read: IOPS=4, BW=4409KiB/s (4515kB/s)(44.0MiB/10218msec) 00:19:14.445 slat (usec): min=932, max=2090.9k, avg=229075.13, stdev=620704.99 00:19:14.445 clat (msec): min=138, max=10213, avg=8420.41, stdev=2847.56 00:19:14.445 lat (msec): min=2201, max=10217, avg=8649.49, stdev=2556.32 00:19:14.445 clat percentiles (msec): 00:19:14.445 | 1.00th=[ 
138], 5.00th=[ 2232], 10.00th=[ 4329], 20.00th=[ 6477], 00:19:14.445 | 30.00th=[ 8658], 40.00th=[10134], 50.00th=[10134], 60.00th=[10134], 00:19:14.445 | 70.00th=[10134], 80.00th=[10134], 90.00th=[10268], 95.00th=[10268], 00:19:14.445 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:19:14.445 | 99.99th=[10268] 00:19:14.445 lat (msec) : 250=2.27%, >=2000=97.73% 00:19:14.445 cpu : usr=0.01%, sys=0.47%, ctx=80, majf=0, minf=11265 00:19:14.445 IO depths : 1=2.3%, 2=4.5%, 4=9.1%, 8=18.2%, 16=36.4%, 32=29.5%, >=64=0.0% 00:19:14.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.445 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:14.445 issued rwts: total=44,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.445 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.445 job1: (groupid=0, jobs=1): err= 0: pid=503256: Fri Apr 26 16:30:21 2024 00:19:14.445 read: IOPS=3, BW=3635KiB/s (3722kB/s)(36.0MiB/10141msec) 00:19:14.445 slat (usec): min=1213, max=2090.8k, avg=277855.44, stdev=678391.76 00:19:14.445 clat (msec): min=137, max=10134, avg=7340.87, stdev=3081.78 00:19:14.445 lat (msec): min=2190, max=10140, avg=7618.72, stdev=2856.50 00:19:14.445 clat percentiles (msec): 00:19:14.445 | 1.00th=[ 138], 5.00th=[ 2198], 10.00th=[ 2232], 20.00th=[ 4396], 00:19:14.445 | 30.00th=[ 6477], 40.00th=[ 6544], 50.00th=[ 8658], 60.00th=[10000], 00:19:14.445 | 70.00th=[10134], 80.00th=[10134], 90.00th=[10134], 95.00th=[10134], 00:19:14.445 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:19:14.445 | 99.99th=[10134] 00:19:14.445 lat (msec) : 250=2.78%, >=2000=97.22% 00:19:14.445 cpu : usr=0.00%, sys=0.40%, ctx=64, majf=0, minf=9217 00:19:14.445 IO depths : 1=2.8%, 2=5.6%, 4=11.1%, 8=22.2%, 16=44.4%, 32=13.9%, >=64=0.0% 00:19:14.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.445 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:14.445 issued rwts: total=36,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.445 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.445 job1: (groupid=0, jobs=1): err= 0: pid=503257: Fri Apr 26 16:30:21 2024 00:19:14.445 read: IOPS=7, BW=7406KiB/s (7583kB/s)(73.0MiB/10094msec) 00:19:14.445 slat (usec): min=740, max=2096.4k, avg=137044.68, stdev=492942.23 00:19:14.445 clat (msec): min=88, max=10090, avg=5914.03, stdev=3656.02 00:19:14.445 lat (msec): min=96, max=10093, avg=6051.07, stdev=3621.99 00:19:14.445 clat percentiles (msec): 00:19:14.445 | 1.00th=[ 89], 5.00th=[ 112], 10.00th=[ 140], 20.00th=[ 2232], 00:19:14.445 | 30.00th=[ 2265], 40.00th=[ 4396], 50.00th=[ 6544], 60.00th=[ 8658], 00:19:14.445 | 70.00th=[ 8658], 80.00th=[10134], 90.00th=[10134], 95.00th=[10134], 00:19:14.445 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:19:14.445 | 99.99th=[10134] 00:19:14.445 lat (msec) : 100=2.74%, 250=10.96%, >=2000=86.30% 00:19:14.445 cpu : usr=0.02%, sys=0.67%, ctx=67, majf=0, minf=18689 00:19:14.445 IO depths : 1=1.4%, 2=2.7%, 4=5.5%, 8=11.0%, 16=21.9%, 32=43.8%, >=64=13.7% 00:19:14.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.445 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:19:14.445 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.445 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.446 job1: (groupid=0, jobs=1): err= 0: pid=503258: Fri Apr 26 16:30:21 2024 
00:19:14.446 read: IOPS=1, BW=1265KiB/s (1296kB/s)(15.0MiB/12139msec) 00:19:14.446 slat (usec): min=918, max=4261.0k, avg=668238.22, stdev=1253832.72 00:19:14.446 clat (msec): min=2114, max=12026, avg=7490.67, stdev=3143.74 00:19:14.446 lat (msec): min=4226, max=12138, avg=8158.91, stdev=2980.44 00:19:14.446 clat percentiles (msec): 00:19:14.446 | 1.00th=[ 2123], 5.00th=[ 2123], 10.00th=[ 4212], 20.00th=[ 4245], 00:19:14.446 | 30.00th=[ 6342], 40.00th=[ 6409], 50.00th=[ 6409], 60.00th=[ 6409], 00:19:14.446 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10671], 95.00th=[12013], 00:19:14.446 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:19:14.446 | 99.99th=[12013] 00:19:14.446 lat (msec) : >=2000=100.00% 00:19:14.446 cpu : usr=0.00%, sys=0.10%, ctx=34, majf=0, minf=3841 00:19:14.446 IO depths : 1=6.7%, 2=13.3%, 4=26.7%, 8=53.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:14.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.446 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.446 issued rwts: total=15,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.446 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.446 job1: (groupid=0, jobs=1): err= 0: pid=503259: Fri Apr 26 16:30:21 2024 00:19:14.446 read: IOPS=4, BW=5011KiB/s (5131kB/s)(60.0MiB/12261msec) 00:19:14.446 slat (usec): min=801, max=2074.9k, avg=167842.95, stdev=538957.18 00:19:14.446 clat (msec): min=2189, max=12259, avg=9996.38, stdev=3023.12 00:19:14.446 lat (msec): min=4257, max=12260, avg=10164.22, stdev=2857.39 00:19:14.446 clat percentiles (msec): 00:19:14.446 | 1.00th=[ 2198], 5.00th=[ 4279], 10.00th=[ 4329], 20.00th=[ 6477], 00:19:14.446 | 30.00th=[ 8557], 40.00th=[10671], 50.00th=[12147], 60.00th=[12147], 00:19:14.446 | 70.00th=[12147], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:19:14.446 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:19:14.446 | 99.99th=[12281] 00:19:14.446 lat (msec) : >=2000=100.00% 00:19:14.446 cpu : usr=0.01%, sys=0.51%, ctx=84, majf=0, minf=15361 00:19:14.446 IO depths : 1=1.7%, 2=3.3%, 4=6.7%, 8=13.3%, 16=26.7%, 32=48.3%, >=64=0.0% 00:19:14.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.446 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:14.446 issued rwts: total=60,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.446 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.446 job1: (groupid=0, jobs=1): err= 0: pid=503260: Fri Apr 26 16:30:21 2024 00:19:14.446 read: IOPS=4, BW=4342KiB/s (4446kB/s)(52.0MiB/12264msec) 00:19:14.446 slat (usec): min=912, max=2133.2k, avg=194094.53, stdev=583593.29 00:19:14.446 clat (msec): min=2170, max=12262, avg=10604.43, stdev=2609.68 00:19:14.446 lat (msec): min=4271, max=12263, avg=10798.52, stdev=2330.52 00:19:14.446 clat percentiles (msec): 00:19:14.446 | 1.00th=[ 2165], 5.00th=[ 4329], 10.00th=[ 6409], 20.00th=[ 8557], 00:19:14.446 | 30.00th=[10671], 40.00th=[12013], 50.00th=[12147], 60.00th=[12147], 00:19:14.446 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12281], 95.00th=[12281], 00:19:14.446 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:19:14.446 | 99.99th=[12281] 00:19:14.446 lat (msec) : >=2000=100.00% 00:19:14.446 cpu : usr=0.00%, sys=0.43%, ctx=77, majf=0, minf=13313 00:19:14.446 IO depths : 1=1.9%, 2=3.8%, 4=7.7%, 8=15.4%, 16=30.8%, 32=40.4%, >=64=0.0% 00:19:14.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:19:14.446 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:14.446 issued rwts: total=52,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.446 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.446 job1: (groupid=0, jobs=1): err= 0: pid=503261: Fri Apr 26 16:30:21 2024 00:19:14.446 read: IOPS=26, BW=26.3MiB/s (27.6MB/s)(269MiB/10222msec) 00:19:14.446 slat (usec): min=90, max=2096.2k, avg=37531.25, stdev=248987.66 00:19:14.446 clat (msec): min=123, max=9227, avg=4613.63, stdev=3738.89 00:19:14.446 lat (msec): min=659, max=9229, avg=4651.16, stdev=3736.33 00:19:14.446 clat percentiles (msec): 00:19:14.446 | 1.00th=[ 659], 5.00th=[ 667], 10.00th=[ 701], 20.00th=[ 793], 00:19:14.446 | 30.00th=[ 911], 40.00th=[ 969], 50.00th=[ 4329], 60.00th=[ 7013], 00:19:14.446 | 70.00th=[ 8658], 80.00th=[ 8926], 90.00th=[ 9060], 95.00th=[ 9194], 00:19:14.446 | 99.00th=[ 9194], 99.50th=[ 9194], 99.90th=[ 9194], 99.95th=[ 9194], 00:19:14.446 | 99.99th=[ 9194] 00:19:14.446 bw ( KiB/s): min= 6144, max=147456, per=1.35%, avg=41252.57, stdev=49257.54, samples=7 00:19:14.446 iops : min= 6, max= 144, avg=40.29, stdev=48.10, samples=7 00:19:14.446 lat (msec) : 250=0.37%, 750=13.01%, 1000=29.37%, 2000=0.37%, >=2000=56.88% 00:19:14.446 cpu : usr=0.01%, sys=1.32%, ctx=265, majf=0, minf=32769 00:19:14.446 IO depths : 1=0.4%, 2=0.7%, 4=1.5%, 8=3.0%, 16=5.9%, 32=11.9%, >=64=76.6% 00:19:14.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.446 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:19:14.446 issued rwts: total=269,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.446 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.446 job1: (groupid=0, jobs=1): err= 0: pid=503262: Fri Apr 26 16:30:21 2024 00:19:14.446 read: IOPS=2, BW=2350KiB/s (2406kB/s)(28.0MiB/12203msec) 00:19:14.446 slat (usec): min=927, max=2149.9k, avg=360185.27, stdev=769558.78 00:19:14.446 clat (msec): min=2117, max=12200, avg=9805.08, stdev=3449.98 00:19:14.446 lat (msec): min=4219, max=12202, avg=10165.26, stdev=3129.21 00:19:14.446 clat percentiles (msec): 00:19:14.446 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 4279], 00:19:14.446 | 30.00th=[ 8557], 40.00th=[12013], 50.00th=[12013], 60.00th=[12147], 00:19:14.446 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:19:14.446 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:19:14.446 | 99.99th=[12147] 00:19:14.446 lat (msec) : >=2000=100.00% 00:19:14.446 cpu : usr=0.00%, sys=0.23%, ctx=62, majf=0, minf=7169 00:19:14.446 IO depths : 1=3.6%, 2=7.1%, 4=14.3%, 8=28.6%, 16=46.4%, 32=0.0%, >=64=0.0% 00:19:14.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.446 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:19:14.446 issued rwts: total=28,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.446 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.446 job1: (groupid=0, jobs=1): err= 0: pid=503263: Fri Apr 26 16:30:21 2024 00:19:14.446 read: IOPS=1, BW=1941KiB/s (1987kB/s)(23.0MiB/12136msec) 00:19:14.446 slat (usec): min=807, max=2142.7k, avg=434778.81, stdev=824950.54 00:19:14.446 clat (msec): min=2135, max=12116, avg=7626.91, stdev=4080.29 00:19:14.446 lat (msec): min=2142, max=12135, avg=8061.69, stdev=4000.53 00:19:14.446 clat percentiles (msec): 00:19:14.446 | 1.00th=[ 2140], 5.00th=[ 2140], 10.00th=[ 2165], 20.00th=[ 4212], 
00:19:14.446 | 30.00th=[ 4245], 40.00th=[ 4329], 50.00th=[ 8557], 60.00th=[10671], 00:19:14.446 | 70.00th=[12013], 80.00th=[12013], 90.00th=[12013], 95.00th=[12013], 00:19:14.446 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:19:14.446 | 99.99th=[12147] 00:19:14.446 lat (msec) : >=2000=100.00% 00:19:14.446 cpu : usr=0.01%, sys=0.16%, ctx=62, majf=0, minf=5889 00:19:14.446 IO depths : 1=4.3%, 2=8.7%, 4=17.4%, 8=34.8%, 16=34.8%, 32=0.0%, >=64=0.0% 00:19:14.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.446 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:19:14.446 issued rwts: total=23,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.446 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.446 job1: (groupid=0, jobs=1): err= 0: pid=503264: Fri Apr 26 16:30:21 2024 00:19:14.446 read: IOPS=3, BW=3331KiB/s (3411kB/s)(40.0MiB/12297msec) 00:19:14.446 slat (usec): min=1012, max=2162.1k, avg=253511.07, stdev=664895.33 00:19:14.446 clat (msec): min=2155, max=12295, avg=11070.58, stdev=2713.70 00:19:14.446 lat (msec): min=4258, max=12296, avg=11324.09, stdev=2301.96 00:19:14.446 clat percentiles (msec): 00:19:14.446 | 1.00th=[ 2165], 5.00th=[ 4245], 10.00th=[ 4329], 20.00th=[10805], 00:19:14.446 | 30.00th=[12147], 40.00th=[12147], 50.00th=[12147], 60.00th=[12281], 00:19:14.446 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:19:14.446 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:19:14.446 | 99.99th=[12281] 00:19:14.446 lat (msec) : >=2000=100.00% 00:19:14.446 cpu : usr=0.00%, sys=0.37%, ctx=80, majf=0, minf=10241 00:19:14.446 IO depths : 1=2.5%, 2=5.0%, 4=10.0%, 8=20.0%, 16=40.0%, 32=22.5%, >=64=0.0% 00:19:14.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.446 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:14.446 issued rwts: total=40,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.446 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.446 job1: (groupid=0, jobs=1): err= 0: pid=503265: Fri Apr 26 16:30:21 2024 00:19:14.446 read: IOPS=2, BW=2533KiB/s (2593kB/s)(30.0MiB/12130msec) 00:19:14.446 slat (usec): min=971, max=2086.3k, avg=333387.65, stdev=736347.95 00:19:14.446 clat (msec): min=2128, max=10767, avg=6733.85, stdev=3020.09 00:19:14.446 lat (msec): min=2143, max=12129, avg=7067.24, stdev=3046.07 00:19:14.446 clat percentiles (msec): 00:19:14.446 | 1.00th=[ 2123], 5.00th=[ 2140], 10.00th=[ 2165], 20.00th=[ 4279], 00:19:14.446 | 30.00th=[ 4329], 40.00th=[ 6409], 50.00th=[ 6477], 60.00th=[ 8557], 00:19:14.446 | 70.00th=[ 8658], 80.00th=[ 8658], 90.00th=[10671], 95.00th=[10805], 00:19:14.446 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:19:14.446 | 99.99th=[10805] 00:19:14.446 lat (msec) : >=2000=100.00% 00:19:14.446 cpu : usr=0.00%, sys=0.24%, ctx=48, majf=0, minf=7681 00:19:14.446 IO depths : 1=3.3%, 2=6.7%, 4=13.3%, 8=26.7%, 16=50.0%, 32=0.0%, >=64=0.0% 00:19:14.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.446 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:19:14.446 issued rwts: total=30,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.446 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.446 job1: (groupid=0, jobs=1): err= 0: pid=503266: Fri Apr 26 16:30:21 2024 00:19:14.446 read: IOPS=1, BW=1266KiB/s (1296kB/s)(15.0MiB/12136msec) 00:19:14.446 slat 
(usec): min=1038, max=4241.9k, avg=668138.04, stdev=1257434.27 00:19:14.446 clat (msec): min=2113, max=12119, avg=8550.53, stdev=4054.12 00:19:14.446 lat (msec): min=2143, max=12135, avg=9218.66, stdev=3730.34 00:19:14.446 clat percentiles (msec): 00:19:14.446 | 1.00th=[ 2106], 5.00th=[ 2106], 10.00th=[ 2140], 20.00th=[ 4245], 00:19:14.446 | 30.00th=[ 4329], 40.00th=[ 6477], 50.00th=[10671], 60.00th=[10805], 00:19:14.446 | 70.00th=[12013], 80.00th=[12013], 90.00th=[12147], 95.00th=[12147], 00:19:14.446 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:19:14.446 | 99.99th=[12147] 00:19:14.446 lat (msec) : >=2000=100.00% 00:19:14.446 cpu : usr=0.00%, sys=0.12%, ctx=51, majf=0, minf=3841 00:19:14.446 IO depths : 1=6.7%, 2=13.3%, 4=26.7%, 8=53.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:14.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.446 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.446 issued rwts: total=15,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.446 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.446 job1: (groupid=0, jobs=1): err= 0: pid=503267: Fri Apr 26 16:30:21 2024 00:19:14.446 read: IOPS=4, BW=4217KiB/s (4318kB/s)(50.0MiB/12141msec) 00:19:14.446 slat (usec): min=503, max=2136.0k, avg=200530.29, stdev=584502.62 00:19:14.446 clat (msec): min=2113, max=12107, avg=10456.28, stdev=2777.13 00:19:14.446 lat (msec): min=2157, max=12140, avg=10656.81, stdev=2511.72 00:19:14.446 clat percentiles (msec): 00:19:14.446 | 1.00th=[ 2106], 5.00th=[ 4279], 10.00th=[ 4329], 20.00th=[ 8557], 00:19:14.446 | 30.00th=[11745], 40.00th=[11745], 50.00th=[11879], 60.00th=[11879], 00:19:14.446 | 70.00th=[11879], 80.00th=[12013], 90.00th=[12013], 95.00th=[12147], 00:19:14.446 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:19:14.446 | 99.99th=[12147] 00:19:14.446 lat (msec) : >=2000=100.00% 00:19:14.446 cpu : usr=0.00%, sys=0.30%, ctx=113, majf=0, minf=12801 00:19:14.446 IO depths : 1=2.0%, 2=4.0%, 4=8.0%, 8=16.0%, 16=32.0%, 32=38.0%, >=64=0.0% 00:19:14.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.446 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:14.446 issued rwts: total=50,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.446 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.446 job2: (groupid=0, jobs=1): err= 0: pid=503268: Fri Apr 26 16:30:21 2024 00:19:14.446 read: IOPS=8, BW=9200KiB/s (9421kB/s)(109MiB/12132msec) 00:19:14.446 slat (usec): min=412, max=2118.6k, avg=91783.18, stdev=400947.93 00:19:14.446 clat (msec): min=2126, max=12114, avg=11197.41, stdev=1612.25 00:19:14.446 lat (msec): min=2167, max=12131, avg=11289.20, stdev=1355.39 00:19:14.446 clat percentiles (msec): 00:19:14.446 | 1.00th=[ 2165], 5.00th=[ 8557], 10.00th=[10671], 20.00th=[10805], 00:19:14.446 | 30.00th=[11476], 40.00th=[11610], 50.00th=[11610], 60.00th=[11745], 00:19:14.446 | 70.00th=[11745], 80.00th=[11879], 90.00th=[11879], 95.00th=[12013], 00:19:14.446 | 99.00th=[12013], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:19:14.446 | 99.99th=[12147] 00:19:14.446 lat (msec) : >=2000=100.00% 00:19:14.446 cpu : usr=0.01%, sys=0.55%, ctx=170, majf=0, minf=27905 00:19:14.446 IO depths : 1=0.9%, 2=1.8%, 4=3.7%, 8=7.3%, 16=14.7%, 32=29.4%, >=64=42.2% 00:19:14.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.446 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=100.0% 00:19:14.446 issued rwts: total=109,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.446 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.446 job2: (groupid=0, jobs=1): err= 0: pid=503269: Fri Apr 26 16:30:21 2024 00:19:14.446 read: IOPS=73, BW=74.0MiB/s (77.5MB/s)(747MiB/10101msec) 00:19:14.446 slat (usec): min=44, max=2096.3k, avg=13387.52, stdev=139193.85 00:19:14.446 clat (msec): min=95, max=6530, avg=901.99, stdev=1304.42 00:19:14.446 lat (msec): min=103, max=6539, avg=915.38, stdev=1327.61 00:19:14.446 clat percentiles (msec): 00:19:14.446 | 1.00th=[ 146], 5.00th=[ 255], 10.00th=[ 257], 20.00th=[ 262], 00:19:14.446 | 30.00th=[ 264], 40.00th=[ 266], 50.00th=[ 266], 60.00th=[ 268], 00:19:14.446 | 70.00th=[ 300], 80.00th=[ 2232], 90.00th=[ 2836], 95.00th=[ 2937], 00:19:14.446 | 99.00th=[ 6007], 99.50th=[ 6544], 99.90th=[ 6544], 99.95th=[ 6544], 00:19:14.446 | 99.99th=[ 6544] 00:19:14.446 bw ( KiB/s): min=20398, max=464896, per=8.33%, avg=253935.60, stdev=213481.60, samples=5 00:19:14.446 iops : min= 19, max= 454, avg=247.80, stdev=208.73, samples=5 00:19:14.446 lat (msec) : 100=0.13%, 250=1.20%, 500=75.50%, 750=1.20%, 1000=0.94% 00:19:14.446 lat (msec) : >=2000=21.02% 00:19:14.446 cpu : usr=0.02%, sys=1.37%, ctx=731, majf=0, minf=32769 00:19:14.446 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=4.3%, >=64=91.6% 00:19:14.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.446 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:14.446 issued rwts: total=747,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.446 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.446 job2: (groupid=0, jobs=1): err= 0: pid=503270: Fri Apr 26 16:30:21 2024 00:19:14.446 read: IOPS=3, BW=3911KiB/s (4005kB/s)(39.0MiB/10211msec) 00:19:14.446 slat (usec): min=769, max=2167.1k, avg=258561.13, stdev=659074.80 00:19:14.446 clat (msec): min=126, max=10209, avg=8571.87, stdev=2998.75 00:19:14.446 lat (msec): min=2182, max=10210, avg=8830.43, stdev=2667.90 00:19:14.446 clat percentiles (msec): 00:19:14.446 | 1.00th=[ 127], 5.00th=[ 2198], 10.00th=[ 2232], 20.00th=[ 6544], 00:19:14.446 | 30.00th=[ 9866], 40.00th=[10000], 50.00th=[10134], 60.00th=[10134], 00:19:14.446 | 70.00th=[10134], 80.00th=[10134], 90.00th=[10268], 95.00th=[10268], 00:19:14.446 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:19:14.446 | 99.99th=[10268] 00:19:14.446 lat (msec) : 250=2.56%, >=2000=97.44% 00:19:14.446 cpu : usr=0.00%, sys=0.42%, ctx=89, majf=0, minf=9985 00:19:14.446 IO depths : 1=2.6%, 2=5.1%, 4=10.3%, 8=20.5%, 16=41.0%, 32=20.5%, >=64=0.0% 00:19:14.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.446 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:14.446 issued rwts: total=39,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.446 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.446 job2: (groupid=0, jobs=1): err= 0: pid=503271: Fri Apr 26 16:30:21 2024 00:19:14.446 read: IOPS=2, BW=2930KiB/s (3000kB/s)(35.0MiB/12234msec) 00:19:14.446 slat (usec): min=1041, max=2074.3k, avg=287571.40, stdev=679792.82 00:19:14.446 clat (msec): min=2168, max=12232, avg=9804.56, stdev=3135.66 00:19:14.446 lat (msec): min=4240, max=12233, avg=10092.13, stdev=2864.54 00:19:14.446 clat percentiles (msec): 00:19:14.446 | 1.00th=[ 2165], 5.00th=[ 4245], 10.00th=[ 4279], 20.00th=[ 6409], 00:19:14.446 | 30.00th=[ 8557], 40.00th=[10671], 
50.00th=[12013], 60.00th=[12147], 00:19:14.446 | 70.00th=[12147], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:19:14.446 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:19:14.446 | 99.99th=[12281] 00:19:14.446 lat (msec) : >=2000=100.00% 00:19:14.446 cpu : usr=0.00%, sys=0.31%, ctx=84, majf=0, minf=8961 00:19:14.446 IO depths : 1=2.9%, 2=5.7%, 4=11.4%, 8=22.9%, 16=45.7%, 32=11.4%, >=64=0.0% 00:19:14.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.446 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:14.446 issued rwts: total=35,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.446 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.446 job2: (groupid=0, jobs=1): err= 0: pid=503272: Fri Apr 26 16:30:21 2024 00:19:14.446 read: IOPS=2, BW=2730KiB/s (2795kB/s)(27.0MiB/10129msec) 00:19:14.446 slat (usec): min=882, max=2089.2k, avg=370689.44, stdev=757627.83 00:19:14.446 clat (msec): min=119, max=10075, avg=5510.65, stdev=3095.74 00:19:14.446 lat (msec): min=133, max=10128, avg=5881.34, stdev=3023.80 00:19:14.446 clat percentiles (msec): 00:19:14.446 | 1.00th=[ 121], 5.00th=[ 134], 10.00th=[ 2198], 20.00th=[ 2265], 00:19:14.446 | 30.00th=[ 4329], 40.00th=[ 4396], 50.00th=[ 4396], 60.00th=[ 6544], 00:19:14.446 | 70.00th=[ 6544], 80.00th=[ 8792], 90.00th=[10000], 95.00th=[10000], 00:19:14.446 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:19:14.446 | 99.99th=[10134] 00:19:14.446 lat (msec) : 250=7.41%, >=2000=92.59% 00:19:14.446 cpu : usr=0.01%, sys=0.25%, ctx=83, majf=0, minf=6913 00:19:14.446 IO depths : 1=3.7%, 2=7.4%, 4=14.8%, 8=29.6%, 16=44.4%, 32=0.0%, >=64=0.0% 00:19:14.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.446 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:19:14.446 issued rwts: total=27,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.446 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.446 job2: (groupid=0, jobs=1): err= 0: pid=503273: Fri Apr 26 16:30:21 2024 00:19:14.446 read: IOPS=77, BW=77.6MiB/s (81.3MB/s)(942MiB/12145msec) 00:19:14.446 slat (usec): min=48, max=2080.8k, avg=10620.12, stdev=96344.74 00:19:14.446 clat (msec): min=507, max=7528, avg=1587.72, stdev=2155.66 00:19:14.446 lat (msec): min=523, max=7531, avg=1598.34, stdev=2162.54 00:19:14.446 clat percentiles (msec): 00:19:14.446 | 1.00th=[ 523], 5.00th=[ 542], 10.00th=[ 542], 20.00th=[ 550], 00:19:14.446 | 30.00th=[ 592], 40.00th=[ 609], 50.00th=[ 642], 60.00th=[ 835], 00:19:14.446 | 70.00th=[ 894], 80.00th=[ 1183], 90.00th=[ 6879], 95.00th=[ 7215], 00:19:14.446 | 99.00th=[ 7483], 99.50th=[ 7483], 99.90th=[ 7550], 99.95th=[ 7550], 00:19:14.446 | 99.99th=[ 7550] 00:19:14.446 bw ( KiB/s): min= 6131, max=251904, per=4.21%, avg=128348.54, stdev=84261.81, samples=13 00:19:14.446 iops : min= 5, max= 246, avg=125.15, stdev=82.46, samples=13 00:19:14.446 lat (msec) : 750=54.35%, 1000=18.58%, 2000=12.74%, >=2000=14.33% 00:19:14.446 cpu : usr=0.07%, sys=1.17%, ctx=1466, majf=0, minf=32769 00:19:14.446 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.4%, >=64=93.3% 00:19:14.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.446 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:14.446 issued rwts: total=942,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.446 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.446 job2: 
(groupid=0, jobs=1): err= 0: pid=503274: Fri Apr 26 16:30:21 2024 00:19:14.446 read: IOPS=79, BW=79.4MiB/s (83.2MB/s)(965MiB/12159msec) 00:19:14.447 slat (usec): min=45, max=2115.3k, avg=10409.40, stdev=96218.16 00:19:14.447 clat (msec): min=554, max=7007, avg=1549.92, stdev=1965.35 00:19:14.447 lat (msec): min=557, max=7014, avg=1560.33, stdev=1971.13 00:19:14.447 clat percentiles (msec): 00:19:14.447 | 1.00th=[ 558], 5.00th=[ 617], 10.00th=[ 634], 20.00th=[ 667], 00:19:14.447 | 30.00th=[ 701], 40.00th=[ 743], 50.00th=[ 776], 60.00th=[ 810], 00:19:14.447 | 70.00th=[ 869], 80.00th=[ 1036], 90.00th=[ 6544], 95.00th=[ 6678], 00:19:14.447 | 99.00th=[ 6946], 99.50th=[ 6946], 99.90th=[ 7013], 99.95th=[ 7013], 00:19:14.447 | 99.99th=[ 7013] 00:19:14.447 bw ( KiB/s): min= 1889, max=196608, per=4.33%, avg=131986.38, stdev=65200.19, samples=13 00:19:14.447 iops : min= 1, max= 192, avg=128.77, stdev=63.82, samples=13 00:19:14.447 lat (msec) : 750=43.32%, 1000=32.02%, 2000=10.67%, >=2000=13.99% 00:19:14.447 cpu : usr=0.04%, sys=1.66%, ctx=1227, majf=0, minf=32769 00:19:14.447 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.3%, >=64=93.5% 00:19:14.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.447 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:14.447 issued rwts: total=965,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.447 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.447 job2: (groupid=0, jobs=1): err= 0: pid=503275: Fri Apr 26 16:30:21 2024 00:19:14.447 read: IOPS=4, BW=4681KiB/s (4793kB/s)(56.0MiB/12250msec) 00:19:14.447 slat (usec): min=916, max=2114.2k, avg=180330.39, stdev=563368.51 00:19:14.447 clat (msec): min=2150, max=12248, avg=11089.21, stdev=2532.27 00:19:14.447 lat (msec): min=4243, max=12249, avg=11269.54, stdev=2225.10 00:19:14.447 clat percentiles (msec): 00:19:14.447 | 1.00th=[ 2165], 5.00th=[ 4245], 10.00th=[ 6409], 20.00th=[12013], 00:19:14.447 | 30.00th=[12147], 40.00th=[12147], 50.00th=[12147], 60.00th=[12147], 00:19:14.447 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:19:14.447 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:19:14.447 | 99.99th=[12281] 00:19:14.447 lat (msec) : >=2000=100.00% 00:19:14.447 cpu : usr=0.00%, sys=0.51%, ctx=91, majf=0, minf=14337 00:19:14.447 IO depths : 1=1.8%, 2=3.6%, 4=7.1%, 8=14.3%, 16=28.6%, 32=44.6%, >=64=0.0% 00:19:14.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.447 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:14.447 issued rwts: total=56,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.447 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.447 job2: (groupid=0, jobs=1): err= 0: pid=503276: Fri Apr 26 16:30:21 2024 00:19:14.447 read: IOPS=15, BW=15.7MiB/s (16.4MB/s)(190MiB/12139msec) 00:19:14.447 slat (usec): min=60, max=2081.3k, avg=53241.02, stdev=295653.22 00:19:14.447 clat (msec): min=2022, max=10669, avg=5500.42, stdev=1927.05 00:19:14.447 lat (msec): min=2146, max=10691, avg=5553.66, stdev=1953.29 00:19:14.447 clat percentiles (msec): 00:19:14.447 | 1.00th=[ 2140], 5.00th=[ 2165], 10.00th=[ 4044], 20.00th=[ 4111], 00:19:14.447 | 30.00th=[ 4178], 40.00th=[ 4245], 50.00th=[ 4329], 60.00th=[ 6208], 00:19:14.447 | 70.00th=[ 6275], 80.00th=[ 7886], 90.00th=[ 7953], 95.00th=[ 8020], 00:19:14.447 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:19:14.447 | 99.99th=[10671] 
00:19:14.447 bw ( KiB/s): min= 1957, max=61440, per=1.05%, avg=31903.00, stdev=24290.97, samples=4 00:19:14.447 iops : min= 1, max= 60, avg=30.75, stdev=24.10, samples=4 00:19:14.447 lat (msec) : >=2000=100.00% 00:19:14.447 cpu : usr=0.00%, sys=0.86%, ctx=208, majf=0, minf=32769 00:19:14.447 IO depths : 1=0.5%, 2=1.1%, 4=2.1%, 8=4.2%, 16=8.4%, 32=16.8%, >=64=66.8% 00:19:14.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.447 complete : 0=0.0%, 4=98.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.6% 00:19:14.447 issued rwts: total=190,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.447 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.447 job2: (groupid=0, jobs=1): err= 0: pid=503277: Fri Apr 26 16:30:21 2024 00:19:14.447 read: IOPS=18, BW=18.2MiB/s (19.1MB/s)(221MiB/12141msec) 00:19:14.447 slat (usec): min=414, max=2184.2k, avg=45361.12, stdev=280724.14 00:19:14.447 clat (msec): min=511, max=11593, avg=6738.04, stdev=5112.66 00:19:14.447 lat (msec): min=517, max=11596, avg=6783.40, stdev=5110.76 00:19:14.447 clat percentiles (msec): 00:19:14.447 | 1.00th=[ 518], 5.00th=[ 535], 10.00th=[ 558], 20.00th=[ 625], 00:19:14.447 | 30.00th=[ 676], 40.00th=[ 3138], 50.00th=[10939], 60.00th=[11073], 00:19:14.447 | 70.00th=[11208], 80.00th=[11476], 90.00th=[11476], 95.00th=[11610], 00:19:14.447 | 99.00th=[11610], 99.50th=[11610], 99.90th=[11610], 99.95th=[11610], 00:19:14.447 | 99.99th=[11610] 00:19:14.447 bw ( KiB/s): min= 1957, max=100352, per=0.90%, avg=27471.43, stdev=41436.57, samples=7 00:19:14.447 iops : min= 1, max= 98, avg=26.43, stdev=40.74, samples=7 00:19:14.447 lat (msec) : 750=38.01%, 1000=0.45%, >=2000=61.54% 00:19:14.447 cpu : usr=0.03%, sys=0.65%, ctx=446, majf=0, minf=32769 00:19:14.447 IO depths : 1=0.5%, 2=0.9%, 4=1.8%, 8=3.6%, 16=7.2%, 32=14.5%, >=64=71.5% 00:19:14.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.447 complete : 0=0.0%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.1% 00:19:14.447 issued rwts: total=221,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.447 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.447 job2: (groupid=0, jobs=1): err= 0: pid=503278: Fri Apr 26 16:30:21 2024 00:19:14.447 read: IOPS=3, BW=3188KiB/s (3265kB/s)(38.0MiB/12204msec) 00:19:14.447 slat (usec): min=869, max=2158.0k, avg=264839.52, stdev=675566.02 00:19:14.447 clat (msec): min=2138, max=12202, avg=11223.00, stdev=2295.15 00:19:14.447 lat (msec): min=4297, max=12202, avg=11487.84, stdev=1729.56 00:19:14.447 clat percentiles (msec): 00:19:14.447 | 1.00th=[ 2140], 5.00th=[ 4329], 10.00th=[ 8490], 20.00th=[10671], 00:19:14.447 | 30.00th=[12147], 40.00th=[12147], 50.00th=[12147], 60.00th=[12147], 00:19:14.447 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:19:14.447 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:19:14.447 | 99.99th=[12147] 00:19:14.447 lat (msec) : >=2000=100.00% 00:19:14.447 cpu : usr=0.00%, sys=0.32%, ctx=77, majf=0, minf=9729 00:19:14.447 IO depths : 1=2.6%, 2=5.3%, 4=10.5%, 8=21.1%, 16=42.1%, 32=18.4%, >=64=0.0% 00:19:14.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.447 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:14.447 issued rwts: total=38,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.447 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.447 job2: (groupid=0, jobs=1): err= 0: pid=503279: Fri Apr 26 16:30:21 2024 00:19:14.447 
read: IOPS=23, BW=23.1MiB/s (24.3MB/s)(283MiB/12233msec) 00:19:14.447 slat (usec): min=61, max=4212.3k, avg=35747.43, stdev=306062.36 00:19:14.447 clat (msec): min=406, max=11343, avg=5328.14, stdev=5173.43 00:19:14.447 lat (msec): min=409, max=11346, avg=5363.89, stdev=5179.46 00:19:14.447 clat percentiles (msec): 00:19:14.447 | 1.00th=[ 409], 5.00th=[ 409], 10.00th=[ 426], 20.00th=[ 489], 00:19:14.447 | 30.00th=[ 600], 40.00th=[ 810], 50.00th=[ 835], 60.00th=[10939], 00:19:14.447 | 70.00th=[11073], 80.00th=[11208], 90.00th=[11342], 95.00th=[11342], 00:19:14.447 | 99.00th=[11342], 99.50th=[11342], 99.90th=[11342], 99.95th=[11342], 00:19:14.447 | 99.99th=[11342] 00:19:14.447 bw ( KiB/s): min= 1954, max=161792, per=1.50%, avg=45627.71, stdev=73259.00, samples=7 00:19:14.447 iops : min= 1, max= 158, avg=44.43, stdev=71.63, samples=7 00:19:14.447 lat (msec) : 500=21.20%, 750=14.13%, 1000=18.02%, >=2000=46.64% 00:19:14.447 cpu : usr=0.00%, sys=0.92%, ctx=373, majf=0, minf=32769 00:19:14.447 IO depths : 1=0.4%, 2=0.7%, 4=1.4%, 8=2.8%, 16=5.7%, 32=11.3%, >=64=77.7% 00:19:14.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.447 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:19:14.447 issued rwts: total=283,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.447 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.447 job2: (groupid=0, jobs=1): err= 0: pid=503280: Fri Apr 26 16:30:21 2024 00:19:14.447 read: IOPS=2, BW=2939KiB/s (3010kB/s)(29.0MiB/10103msec) 00:19:14.447 slat (usec): min=1109, max=2090.1k, avg=344990.77, stdev=735592.14 00:19:14.447 clat (msec): min=97, max=10088, avg=5950.84, stdev=3759.41 00:19:14.447 lat (msec): min=106, max=10102, avg=6295.83, stdev=3660.85 00:19:14.447 clat percentiles (msec): 00:19:14.447 | 1.00th=[ 99], 5.00th=[ 107], 10.00th=[ 108], 20.00th=[ 2198], 00:19:14.447 | 30.00th=[ 4329], 40.00th=[ 4396], 50.00th=[ 6544], 60.00th=[ 8658], 00:19:14.447 | 70.00th=[ 8792], 80.00th=[10000], 90.00th=[10134], 95.00th=[10134], 00:19:14.447 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:19:14.447 | 99.99th=[10134] 00:19:14.447 lat (msec) : 100=3.45%, 250=13.79%, >=2000=82.76% 00:19:14.447 cpu : usr=0.00%, sys=0.32%, ctx=89, majf=0, minf=7425 00:19:14.447 IO depths : 1=3.4%, 2=6.9%, 4=13.8%, 8=27.6%, 16=48.3%, 32=0.0%, >=64=0.0% 00:19:14.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.447 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:19:14.447 issued rwts: total=29,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.447 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.447 job3: (groupid=0, jobs=1): err= 0: pid=503281: Fri Apr 26 16:30:21 2024 00:19:14.447 read: IOPS=72, BW=72.9MiB/s (76.4MB/s)(884MiB/12133msec) 00:19:14.447 slat (usec): min=35, max=2103.7k, avg=11319.85, stdev=121382.27 00:19:14.447 clat (msec): min=120, max=10723, avg=1617.61, stdev=2816.08 00:19:14.447 lat (msec): min=120, max=10867, avg=1628.93, stdev=2825.77 00:19:14.447 clat percentiles (msec): 00:19:14.447 | 1.00th=[ 126], 5.00th=[ 129], 10.00th=[ 153], 20.00th=[ 266], 00:19:14.447 | 30.00th=[ 300], 40.00th=[ 330], 50.00th=[ 376], 60.00th=[ 443], 00:19:14.447 | 70.00th=[ 506], 80.00th=[ 1301], 90.00th=[ 8658], 95.00th=[ 8658], 00:19:14.447 | 99.00th=[ 8792], 99.50th=[ 8792], 99.90th=[10671], 99.95th=[10671], 00:19:14.447 | 99.99th=[10671] 00:19:14.447 bw ( KiB/s): min= 7907, max=620544, per=6.35%, avg=193753.75, 
stdev=228985.07, samples=8 00:19:14.447 iops : min= 7, max= 606, avg=189.00, stdev=223.81, samples=8 00:19:14.447 lat (msec) : 250=16.74%, 500=53.17%, 750=2.26%, 1000=3.17%, 2000=8.14% 00:19:14.447 lat (msec) : >=2000=16.52% 00:19:14.447 cpu : usr=0.02%, sys=1.14%, ctx=1340, majf=0, minf=32769 00:19:14.447 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=92.9% 00:19:14.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.447 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:14.447 issued rwts: total=884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.447 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.447 job3: (groupid=0, jobs=1): err= 0: pid=503282: Fri Apr 26 16:30:21 2024 00:19:14.447 read: IOPS=13, BW=13.5MiB/s (14.2MB/s)(137MiB/10137msec) 00:19:14.447 slat (usec): min=141, max=2071.0k, avg=73018.54, stdev=344920.52 00:19:14.447 clat (msec): min=132, max=9857, avg=8152.00, stdev=2553.42 00:19:14.447 lat (msec): min=140, max=9859, avg=8225.02, stdev=2462.35 00:19:14.447 clat percentiles (msec): 00:19:14.447 | 1.00th=[ 140], 5.00th=[ 2198], 10.00th=[ 2265], 20.00th=[ 7819], 00:19:14.447 | 30.00th=[ 8792], 40.00th=[ 8926], 50.00th=[ 9194], 60.00th=[ 9329], 00:19:14.447 | 70.00th=[ 9463], 80.00th=[ 9597], 90.00th=[ 9731], 95.00th=[ 9731], 00:19:14.447 | 99.00th=[ 9866], 99.50th=[ 9866], 99.90th=[ 9866], 99.95th=[ 9866], 00:19:14.447 | 99.99th=[ 9866] 00:19:14.447 bw ( KiB/s): min= 7585, max=12288, per=0.33%, avg=9936.50, stdev=3325.52, samples=2 00:19:14.447 iops : min= 7, max= 12, avg= 9.50, stdev= 3.54, samples=2 00:19:14.447 lat (msec) : 250=2.92%, >=2000=97.08% 00:19:14.447 cpu : usr=0.00%, sys=0.89%, ctx=422, majf=0, minf=32769 00:19:14.447 IO depths : 1=0.7%, 2=1.5%, 4=2.9%, 8=5.8%, 16=11.7%, 32=23.4%, >=64=54.0% 00:19:14.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.447 complete : 0=0.0%, 4=90.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=9.1% 00:19:14.447 issued rwts: total=137,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.447 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.447 job3: (groupid=0, jobs=1): err= 0: pid=503283: Fri Apr 26 16:30:21 2024 00:19:14.447 read: IOPS=125, BW=125MiB/s (131MB/s)(1523MiB/12173msec) 00:19:14.447 slat (usec): min=47, max=2112.9k, avg=6603.38, stdev=61948.32 00:19:14.447 clat (msec): min=266, max=5706, avg=981.37, stdev=1400.13 00:19:14.447 lat (msec): min=267, max=5707, avg=987.98, stdev=1404.06 00:19:14.447 clat percentiles (msec): 00:19:14.447 | 1.00th=[ 268], 5.00th=[ 271], 10.00th=[ 271], 20.00th=[ 284], 00:19:14.447 | 30.00th=[ 422], 40.00th=[ 430], 50.00th=[ 477], 60.00th=[ 634], 00:19:14.447 | 70.00th=[ 860], 80.00th=[ 961], 90.00th=[ 1116], 95.00th=[ 5537], 00:19:14.447 | 99.00th=[ 5671], 99.50th=[ 5671], 99.90th=[ 5738], 99.95th=[ 5738], 00:19:14.447 | 99.99th=[ 5738] 00:19:14.447 bw ( KiB/s): min= 1838, max=471040, per=6.24%, avg=190462.47, stdev=143998.18, samples=15 00:19:14.447 iops : min= 1, max= 460, avg=185.87, stdev=140.65, samples=15 00:19:14.447 lat (msec) : 500=52.40%, 750=12.34%, 1000=16.55%, 2000=10.31%, >=2000=8.40% 00:19:14.447 cpu : usr=0.05%, sys=1.68%, ctx=1986, majf=0, minf=32769 00:19:14.447 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.1%, >=64=95.9% 00:19:14.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.447 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:14.447 issued rwts: 
total=1523,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.447 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.447 job3: (groupid=0, jobs=1): err= 0: pid=503284: Fri Apr 26 16:30:21 2024 00:19:14.447 read: IOPS=56, BW=56.3MiB/s (59.1MB/s)(571MiB/10137msec) 00:19:14.447 slat (usec): min=45, max=2070.6k, avg=17517.52, stdev=155532.62 00:19:14.447 clat (msec): min=131, max=8355, avg=1301.15, stdev=2333.42 00:19:14.447 lat (msec): min=136, max=8363, avg=1318.67, stdev=2353.76 00:19:14.447 clat percentiles (msec): 00:19:14.447 | 1.00th=[ 157], 5.00th=[ 230], 10.00th=[ 347], 20.00th=[ 477], 00:19:14.447 | 30.00th=[ 477], 40.00th=[ 481], 50.00th=[ 485], 60.00th=[ 485], 00:19:14.447 | 70.00th=[ 498], 80.00th=[ 506], 90.00th=[ 4866], 95.00th=[ 8288], 00:19:14.447 | 99.00th=[ 8356], 99.50th=[ 8356], 99.90th=[ 8356], 99.95th=[ 8356], 00:19:14.447 | 99.99th=[ 8356] 00:19:14.447 bw ( KiB/s): min=122880, max=272384, per=7.45%, avg=227328.00, stdev=70012.45, samples=4 00:19:14.447 iops : min= 120, max= 266, avg=222.00, stdev=68.37, samples=4 00:19:14.447 lat (msec) : 250=5.25%, 500=68.83%, 750=13.13%, >=2000=12.78% 00:19:14.447 cpu : usr=0.04%, sys=1.42%, ctx=575, majf=0, minf=32769 00:19:14.447 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.6%, >=64=89.0% 00:19:14.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.447 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:14.447 issued rwts: total=571,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.447 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.447 job3: (groupid=0, jobs=1): err= 0: pid=503285: Fri Apr 26 16:30:21 2024 00:19:14.447 read: IOPS=4, BW=4726KiB/s (4839kB/s)(56.0MiB/12135msec) 00:19:14.447 slat (usec): min=703, max=2049.0k, avg=178613.18, stdev=553295.92 00:19:14.447 clat (msec): min=2131, max=12132, avg=8201.80, stdev=3419.16 00:19:14.447 lat (msec): min=2142, max=12134, avg=8380.41, stdev=3357.01 00:19:14.447 clat percentiles (msec): 00:19:14.447 | 1.00th=[ 2140], 5.00th=[ 2165], 10.00th=[ 4212], 20.00th=[ 4279], 00:19:14.447 | 30.00th=[ 6342], 40.00th=[ 6409], 50.00th=[ 8557], 60.00th=[10671], 00:19:14.447 | 70.00th=[10671], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:19:14.447 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:19:14.447 | 99.99th=[12147] 00:19:14.447 lat (msec) : >=2000=100.00% 00:19:14.447 cpu : usr=0.00%, sys=0.43%, ctx=68, majf=0, minf=14337 00:19:14.447 IO depths : 1=1.8%, 2=3.6%, 4=7.1%, 8=14.3%, 16=28.6%, 32=44.6%, >=64=0.0% 00:19:14.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.447 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:14.447 issued rwts: total=56,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.447 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.447 job3: (groupid=0, jobs=1): err= 0: pid=503286: Fri Apr 26 16:30:21 2024 00:19:14.447 read: IOPS=100, BW=101MiB/s (106MB/s)(1228MiB/12170msec) 00:19:14.447 slat (usec): min=44, max=2125.2k, avg=8187.41, stdev=99657.22 00:19:14.447 clat (msec): min=125, max=8405, avg=1202.66, stdev=2385.83 00:19:14.447 lat (msec): min=127, max=8405, avg=1210.85, stdev=2393.59 00:19:14.447 clat percentiles (msec): 00:19:14.447 | 1.00th=[ 129], 5.00th=[ 130], 10.00th=[ 257], 20.00th=[ 271], 00:19:14.447 | 30.00th=[ 317], 40.00th=[ 347], 50.00th=[ 380], 60.00th=[ 430], 00:19:14.447 | 70.00th=[ 477], 80.00th=[ 531], 90.00th=[ 6409], 95.00th=[ 8356], 
00:19:14.447 | 99.00th=[ 8423], 99.50th=[ 8423], 99.90th=[ 8423], 99.95th=[ 8423], 00:19:14.447 | 99.99th=[ 8423] 00:19:14.447 bw ( KiB/s): min= 1858, max=481280, per=7.39%, avg=225462.50, stdev=171739.78, samples=10 00:19:14.447 iops : min= 1, max= 470, avg=220.00, stdev=167.96, samples=10 00:19:14.447 lat (msec) : 250=8.39%, 500=68.49%, 750=11.32%, 2000=0.16%, >=2000=11.64% 00:19:14.447 cpu : usr=0.02%, sys=1.41%, ctx=1511, majf=0, minf=32769 00:19:14.447 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.6%, >=64=94.9% 00:19:14.447 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.447 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:14.447 issued rwts: total=1228,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.447 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.447 job3: (groupid=0, jobs=1): err= 0: pid=503287: Fri Apr 26 16:30:21 2024 00:19:14.447 read: IOPS=5, BW=5125KiB/s (5248kB/s)(61.0MiB/12189msec) 00:19:14.447 slat (usec): min=603, max=2074.6k, avg=164339.35, stdev=530193.05 00:19:14.447 clat (msec): min=2163, max=12183, avg=9897.76, stdev=3145.45 00:19:14.447 lat (msec): min=4207, max=12188, avg=10062.10, stdev=2992.81 00:19:14.448 clat percentiles (msec): 00:19:14.448 | 1.00th=[ 2165], 5.00th=[ 4245], 10.00th=[ 4279], 20.00th=[ 6409], 00:19:14.448 | 30.00th=[ 8557], 40.00th=[10671], 50.00th=[12013], 60.00th=[12013], 00:19:14.448 | 70.00th=[12013], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:19:14.448 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:19:14.448 | 99.99th=[12147] 00:19:14.448 lat (msec) : >=2000=100.00% 00:19:14.448 cpu : usr=0.00%, sys=0.43%, ctx=80, majf=0, minf=15617 00:19:14.448 IO depths : 1=1.6%, 2=3.3%, 4=6.6%, 8=13.1%, 16=26.2%, 32=49.2%, >=64=0.0% 00:19:14.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.448 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:14.448 issued rwts: total=61,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.448 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.448 job3: (groupid=0, jobs=1): err= 0: pid=503288: Fri Apr 26 16:30:21 2024 00:19:14.448 read: IOPS=3, BW=3366KiB/s (3447kB/s)(40.0MiB/12169msec) 00:19:14.448 slat (usec): min=892, max=2052.3k, avg=250343.11, stdev=639606.60 00:19:14.448 clat (msec): min=2154, max=12166, avg=9543.06, stdev=3210.13 00:19:14.448 lat (msec): min=4200, max=12167, avg=9793.40, stdev=3002.90 00:19:14.448 clat percentiles (msec): 00:19:14.448 | 1.00th=[ 2165], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 6342], 00:19:14.448 | 30.00th=[ 8490], 40.00th=[10671], 50.00th=[10671], 60.00th=[12013], 00:19:14.448 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:19:14.448 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:19:14.448 | 99.99th=[12147] 00:19:14.448 lat (msec) : >=2000=100.00% 00:19:14.448 cpu : usr=0.00%, sys=0.31%, ctx=69, majf=0, minf=10241 00:19:14.448 IO depths : 1=2.5%, 2=5.0%, 4=10.0%, 8=20.0%, 16=40.0%, 32=22.5%, >=64=0.0% 00:19:14.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.448 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:14.448 issued rwts: total=40,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.448 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.448 job3: (groupid=0, jobs=1): err= 0: pid=503289: Fri Apr 26 16:30:21 2024 00:19:14.448 read: IOPS=70, 
BW=70.8MiB/s (74.2MB/s)(720MiB/10174msec) 00:19:14.448 slat (usec): min=45, max=2063.5k, avg=13932.44, stdev=131612.66 00:19:14.448 clat (msec): min=138, max=6823, avg=1683.58, stdev=2200.11 00:19:14.448 lat (msec): min=255, max=6824, avg=1697.51, stdev=2205.78 00:19:14.448 clat percentiles (msec): 00:19:14.448 | 1.00th=[ 275], 5.00th=[ 288], 10.00th=[ 305], 20.00th=[ 405], 00:19:14.448 | 30.00th=[ 514], 40.00th=[ 575], 50.00th=[ 642], 60.00th=[ 835], 00:19:14.448 | 70.00th=[ 1083], 80.00th=[ 2198], 90.00th=[ 6678], 95.00th=[ 6745], 00:19:14.448 | 99.00th=[ 6812], 99.50th=[ 6812], 99.90th=[ 6812], 99.95th=[ 6812], 00:19:14.448 | 99.99th=[ 6812] 00:19:14.448 bw ( KiB/s): min=12288, max=405504, per=4.42%, avg=134707.89, stdev=134093.11, samples=9 00:19:14.448 iops : min= 12, max= 396, avg=131.44, stdev=131.05, samples=9 00:19:14.448 lat (msec) : 250=0.14%, 500=27.36%, 750=29.17%, 1000=10.28%, 2000=12.08% 00:19:14.448 lat (msec) : >=2000=20.97% 00:19:14.448 cpu : usr=0.00%, sys=1.32%, ctx=870, majf=0, minf=32769 00:19:14.448 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.4%, >=64=91.3% 00:19:14.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.448 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:14.448 issued rwts: total=720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.448 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.448 job3: (groupid=0, jobs=1): err= 0: pid=503290: Fri Apr 26 16:30:21 2024 00:19:14.448 read: IOPS=5, BW=5296KiB/s (5423kB/s)(63.0MiB/12181msec) 00:19:14.448 slat (usec): min=915, max=2116.4k, avg=159884.02, stdev=532692.58 00:19:14.448 clat (msec): min=2107, max=12178, avg=9773.05, stdev=3167.80 00:19:14.448 lat (msec): min=4193, max=12180, avg=9932.93, stdev=3025.68 00:19:14.448 clat percentiles (msec): 00:19:14.448 | 1.00th=[ 2106], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 6342], 00:19:14.448 | 30.00th=[ 8490], 40.00th=[10671], 50.00th=[12013], 60.00th=[12147], 00:19:14.448 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:19:14.448 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:19:14.448 | 99.99th=[12147] 00:19:14.448 lat (msec) : >=2000=100.00% 00:19:14.448 cpu : usr=0.01%, sys=0.50%, ctx=85, majf=0, minf=16129 00:19:14.448 IO depths : 1=1.6%, 2=3.2%, 4=6.3%, 8=12.7%, 16=25.4%, 32=50.8%, >=64=0.0% 00:19:14.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.448 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:14.448 issued rwts: total=63,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.448 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.448 job3: (groupid=0, jobs=1): err= 0: pid=503291: Fri Apr 26 16:30:21 2024 00:19:14.448 read: IOPS=3, BW=3788KiB/s (3879kB/s)(45.0MiB/12164msec) 00:19:14.448 slat (usec): min=986, max=2076.0k, avg=222318.37, stdev=613242.10 00:19:14.448 clat (msec): min=2159, max=12162, avg=10286.65, stdev=2848.23 00:19:14.448 lat (msec): min=4227, max=12163, avg=10508.97, stdev=2576.96 00:19:14.448 clat percentiles (msec): 00:19:14.448 | 1.00th=[ 2165], 5.00th=[ 4245], 10.00th=[ 6342], 20.00th=[ 6409], 00:19:14.448 | 30.00th=[10671], 40.00th=[12013], 50.00th=[12013], 60.00th=[12147], 00:19:14.448 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:19:14.448 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:19:14.448 | 99.99th=[12147] 00:19:14.448 lat (msec) : >=2000=100.00% 
00:19:14.448 cpu : usr=0.00%, sys=0.37%, ctx=84, majf=0, minf=11521 00:19:14.448 IO depths : 1=2.2%, 2=4.4%, 4=8.9%, 8=17.8%, 16=35.6%, 32=31.1%, >=64=0.0% 00:19:14.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.448 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:14.448 issued rwts: total=45,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.448 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.448 job3: (groupid=0, jobs=1): err= 0: pid=503292: Fri Apr 26 16:30:21 2024 00:19:14.448 read: IOPS=151, BW=152MiB/s (159MB/s)(1850MiB/12200msec) 00:19:14.448 slat (usec): min=44, max=2132.1k, avg=5424.29, stdev=69026.06 00:19:14.448 clat (msec): min=115, max=6610, avg=807.04, stdev=1581.02 00:19:14.448 lat (msec): min=115, max=6611, avg=812.46, stdev=1586.24 00:19:14.448 clat percentiles (msec): 00:19:14.448 | 1.00th=[ 130], 5.00th=[ 133], 10.00th=[ 134], 20.00th=[ 134], 00:19:14.448 | 30.00th=[ 136], 40.00th=[ 136], 50.00th=[ 136], 60.00th=[ 330], 00:19:14.448 | 70.00th=[ 659], 80.00th=[ 919], 90.00th=[ 1183], 95.00th=[ 6544], 00:19:14.448 | 99.00th=[ 6611], 99.50th=[ 6611], 99.90th=[ 6611], 99.95th=[ 6611], 00:19:14.448 | 99.99th=[ 6611] 00:19:14.448 bw ( KiB/s): min= 2043, max=972800, per=8.90%, avg=271398.69, stdev=329052.71, samples=13 00:19:14.448 iops : min= 1, max= 950, avg=264.92, stdev=321.41, samples=13 00:19:14.448 lat (msec) : 250=59.03%, 500=2.97%, 750=12.27%, 1000=9.03%, 2000=9.57% 00:19:14.448 lat (msec) : >=2000=7.14% 00:19:14.448 cpu : usr=0.01%, sys=1.74%, ctx=2205, majf=0, minf=32769 00:19:14.448 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:19:14.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.448 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:14.448 issued rwts: total=1850,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.448 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.448 job3: (groupid=0, jobs=1): err= 0: pid=503293: Fri Apr 26 16:30:21 2024 00:19:14.448 read: IOPS=31, BW=31.5MiB/s (33.1MB/s)(320MiB/10144msec) 00:19:14.448 slat (usec): min=77, max=2046.6k, avg=31258.68, stdev=224370.42 00:19:14.448 clat (msec): min=139, max=9091, avg=3849.37, stdev=3507.83 00:19:14.448 lat (msec): min=144, max=9095, avg=3880.63, stdev=3512.31 00:19:14.448 clat percentiles (msec): 00:19:14.448 | 1.00th=[ 518], 5.00th=[ 542], 10.00th=[ 567], 20.00th=[ 584], 00:19:14.448 | 30.00th=[ 609], 40.00th=[ 642], 50.00th=[ 2165], 60.00th=[ 4463], 00:19:14.448 | 70.00th=[ 7013], 80.00th=[ 8792], 90.00th=[ 8926], 95.00th=[ 9060], 00:19:14.448 | 99.00th=[ 9060], 99.50th=[ 9060], 99.90th=[ 9060], 99.95th=[ 9060], 00:19:14.448 | 99.99th=[ 9060] 00:19:14.448 bw ( KiB/s): min= 2048, max=159744, per=1.62%, avg=49336.50, stdev=57753.05, samples=8 00:19:14.448 iops : min= 2, max= 156, avg=48.00, stdev=56.53, samples=8 00:19:14.448 lat (msec) : 250=0.94%, 750=41.88%, 1000=0.62%, >=2000=56.56% 00:19:14.448 cpu : usr=0.00%, sys=0.91%, ctx=530, majf=0, minf=32769 00:19:14.448 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.5%, 16=5.0%, 32=10.0%, >=64=80.3% 00:19:14.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.448 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:19:14.448 issued rwts: total=320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.448 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.448 job4: (groupid=0, jobs=1): err= 0: pid=503294: 
Fri Apr 26 16:30:21 2024 00:19:14.448 read: IOPS=108, BW=109MiB/s (114MB/s)(1096MiB/10096msec) 00:19:14.448 slat (usec): min=42, max=2041.6k, avg=9123.34, stdev=94054.64 00:19:14.448 clat (msec): min=91, max=3955, avg=846.74, stdev=809.39 00:19:14.448 lat (msec): min=99, max=3956, avg=855.86, stdev=815.48 00:19:14.448 clat percentiles (msec): 00:19:14.448 | 1.00th=[ 157], 5.00th=[ 409], 10.00th=[ 414], 20.00th=[ 422], 00:19:14.448 | 30.00th=[ 426], 40.00th=[ 443], 50.00th=[ 567], 60.00th=[ 634], 00:19:14.448 | 70.00th=[ 667], 80.00th=[ 701], 90.00th=[ 2467], 95.00th=[ 2601], 00:19:14.448 | 99.00th=[ 3943], 99.50th=[ 3943], 99.90th=[ 3943], 99.95th=[ 3943], 00:19:14.448 | 99.99th=[ 3943] 00:19:14.448 bw ( KiB/s): min=38912, max=315392, per=7.11%, avg=216810.89, stdev=85807.22, samples=9 00:19:14.448 iops : min= 38, max= 308, avg=211.67, stdev=83.79, samples=9 00:19:14.448 lat (msec) : 100=0.18%, 250=1.28%, 500=44.62%, 750=38.59%, 1000=0.36% 00:19:14.448 lat (msec) : >=2000=14.96% 00:19:14.448 cpu : usr=0.01%, sys=2.11%, ctx=987, majf=0, minf=32769 00:19:14.448 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=2.9%, >=64=94.3% 00:19:14.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.448 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:14.448 issued rwts: total=1096,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.448 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.448 job4: (groupid=0, jobs=1): err= 0: pid=503295: Fri Apr 26 16:30:21 2024 00:19:14.448 read: IOPS=7, BW=8137KiB/s (8332kB/s)(96.0MiB/12081msec) 00:19:14.448 slat (usec): min=429, max=2046.3k, avg=104276.01, stdev=421945.38 00:19:14.448 clat (msec): min=2069, max=12079, avg=8176.05, stdev=3300.54 00:19:14.448 lat (msec): min=2098, max=12080, avg=8280.32, stdev=3263.51 00:19:14.448 clat percentiles (msec): 00:19:14.448 | 1.00th=[ 2072], 5.00th=[ 4144], 10.00th=[ 4178], 20.00th=[ 4279], 00:19:14.448 | 30.00th=[ 6342], 40.00th=[ 6477], 50.00th=[ 8557], 60.00th=[10671], 00:19:14.448 | 70.00th=[10805], 80.00th=[11879], 90.00th=[12013], 95.00th=[12013], 00:19:14.448 | 99.00th=[12013], 99.50th=[12013], 99.90th=[12013], 99.95th=[12013], 00:19:14.448 | 99.99th=[12013] 00:19:14.448 lat (msec) : >=2000=100.00% 00:19:14.448 cpu : usr=0.00%, sys=0.70%, ctx=83, majf=0, minf=24577 00:19:14.448 IO depths : 1=1.0%, 2=2.1%, 4=4.2%, 8=8.3%, 16=16.7%, 32=33.3%, >=64=34.4% 00:19:14.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.448 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:19:14.448 issued rwts: total=96,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.448 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.448 job4: (groupid=0, jobs=1): err= 0: pid=503296: Fri Apr 26 16:30:21 2024 00:19:14.448 read: IOPS=58, BW=58.6MiB/s (61.5MB/s)(590MiB/10064msec) 00:19:14.448 slat (usec): min=42, max=2104.8k, avg=17015.14, stdev=155834.89 00:19:14.448 clat (msec): min=20, max=8183, avg=899.00, stdev=1602.37 00:19:14.448 lat (msec): min=113, max=8184, avg=916.01, stdev=1629.88 00:19:14.448 clat percentiles (msec): 00:19:14.448 | 1.00th=[ 116], 5.00th=[ 207], 10.00th=[ 321], 20.00th=[ 481], 00:19:14.448 | 30.00th=[ 481], 40.00th=[ 485], 50.00th=[ 489], 60.00th=[ 493], 00:19:14.448 | 70.00th=[ 510], 80.00th=[ 523], 90.00th=[ 567], 95.00th=[ 4732], 00:19:14.448 | 99.00th=[ 8154], 99.50th=[ 8154], 99.90th=[ 8154], 99.95th=[ 8154], 00:19:14.448 | 99.99th=[ 8154] 00:19:14.448 bw ( KiB/s): 
min=186368, max=260096, per=7.74%, avg=235945.75, stdev=34354.79, samples=4 00:19:14.448 iops : min= 182, max= 254, avg=230.25, stdev=33.53, samples=4 00:19:14.448 lat (msec) : 50=0.17%, 250=6.27%, 500=60.00%, 750=25.42%, >=2000=8.14% 00:19:14.448 cpu : usr=0.00%, sys=1.54%, ctx=508, majf=0, minf=32769 00:19:14.448 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.7%, 32=5.4%, >=64=89.3% 00:19:14.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.448 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:14.448 issued rwts: total=590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.448 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.448 job4: (groupid=0, jobs=1): err= 0: pid=503297: Fri Apr 26 16:30:21 2024 00:19:14.448 read: IOPS=275, BW=276MiB/s (289MB/s)(2790MiB/10123msec) 00:19:14.448 slat (usec): min=43, max=2049.0k, avg=3588.05, stdev=45560.73 00:19:14.448 clat (msec): min=101, max=4202, avg=317.56, stdev=398.16 00:19:14.448 lat (msec): min=112, max=4204, avg=321.14, stdev=405.67 00:19:14.448 clat percentiles (msec): 00:19:14.448 | 1.00th=[ 113], 5.00th=[ 113], 10.00th=[ 114], 20.00th=[ 115], 00:19:14.448 | 30.00th=[ 116], 40.00th=[ 116], 50.00th=[ 176], 60.00th=[ 284], 00:19:14.448 | 70.00th=[ 418], 80.00th=[ 443], 90.00th=[ 667], 95.00th=[ 684], 00:19:14.448 | 99.00th=[ 2802], 99.50th=[ 4144], 99.90th=[ 4212], 99.95th=[ 4212], 00:19:14.448 | 99.99th=[ 4212] 00:19:14.448 bw ( KiB/s): min=141312, max=1126400, per=14.86%, avg=453337.17, stdev=356825.58, samples=12 00:19:14.448 iops : min= 138, max= 1100, avg=442.58, stdev=348.38, samples=12 00:19:14.448 lat (msec) : 250=56.56%, 500=26.20%, 750=15.95%, 1000=0.25%, >=2000=1.04% 00:19:14.448 cpu : usr=0.16%, sys=2.82%, ctx=2664, majf=0, minf=32769 00:19:14.448 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.7% 00:19:14.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.448 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:14.448 issued rwts: total=2790,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.448 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.448 job4: (groupid=0, jobs=1): err= 0: pid=503298: Fri Apr 26 16:30:21 2024 00:19:14.448 read: IOPS=4, BW=4121KiB/s (4220kB/s)(49.0MiB/12175msec) 00:19:14.448 slat (usec): min=918, max=2129.7k, avg=205540.53, stdev=597486.98 00:19:14.448 clat (msec): min=2102, max=12173, avg=10025.17, stdev=3171.32 00:19:14.448 lat (msec): min=4171, max=12174, avg=10230.71, stdev=2966.95 00:19:14.448 clat percentiles (msec): 00:19:14.448 | 1.00th=[ 2106], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 6342], 00:19:14.448 | 30.00th=[10671], 40.00th=[12013], 50.00th=[12147], 60.00th=[12147], 00:19:14.448 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:19:14.448 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:19:14.448 | 99.99th=[12147] 00:19:14.448 lat (msec) : >=2000=100.00% 00:19:14.448 cpu : usr=0.00%, sys=0.39%, ctx=85, majf=0, minf=12545 00:19:14.448 IO depths : 1=2.0%, 2=4.1%, 4=8.2%, 8=16.3%, 16=32.7%, 32=36.7%, >=64=0.0% 00:19:14.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.448 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:14.448 issued rwts: total=49,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.448 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.448 job4: (groupid=0, jobs=1): err= 0: 
pid=503299: Fri Apr 26 16:30:21 2024 00:19:14.448 read: IOPS=2, BW=2953KiB/s (3024kB/s)(35.0MiB/12138msec) 00:19:14.448 slat (usec): min=1049, max=2122.3k, avg=286656.58, stdev=690446.84 00:19:14.448 clat (msec): min=2104, max=12136, avg=8678.24, stdev=3336.15 00:19:14.448 lat (msec): min=4181, max=12137, avg=8964.89, stdev=3182.18 00:19:14.448 clat percentiles (msec): 00:19:14.448 | 1.00th=[ 2106], 5.00th=[ 4178], 10.00th=[ 4245], 20.00th=[ 4279], 00:19:14.448 | 30.00th=[ 6342], 40.00th=[ 6409], 50.00th=[ 8490], 60.00th=[10671], 00:19:14.448 | 70.00th=[12013], 80.00th=[12013], 90.00th=[12147], 95.00th=[12147], 00:19:14.448 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:19:14.448 | 99.99th=[12147] 00:19:14.448 lat (msec) : >=2000=100.00% 00:19:14.448 cpu : usr=0.00%, sys=0.26%, ctx=76, majf=0, minf=8961 00:19:14.448 IO depths : 1=2.9%, 2=5.7%, 4=11.4%, 8=22.9%, 16=45.7%, 32=11.4%, >=64=0.0% 00:19:14.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.448 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:14.448 issued rwts: total=35,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.448 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.448 job4: (groupid=0, jobs=1): err= 0: pid=503300: Fri Apr 26 16:30:21 2024 00:19:14.448 read: IOPS=9, BW=9310KiB/s (9533kB/s)(111MiB/12209msec) 00:19:14.448 slat (usec): min=455, max=2061.8k, avg=90669.33, stdev=389614.45 00:19:14.448 clat (msec): min=2144, max=12207, avg=8749.32, stdev=3044.24 00:19:14.448 lat (msec): min=4172, max=12208, avg=8839.99, stdev=2995.21 00:19:14.448 clat percentiles (msec): 00:19:14.448 | 1.00th=[ 4178], 5.00th=[ 4279], 10.00th=[ 6141], 20.00th=[ 6208], 00:19:14.448 | 30.00th=[ 6275], 40.00th=[ 6275], 50.00th=[ 8490], 60.00th=[10671], 00:19:14.448 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:19:14.448 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:19:14.448 | 99.99th=[12147] 00:19:14.448 lat (msec) : >=2000=100.00% 00:19:14.448 cpu : usr=0.00%, sys=0.72%, ctx=172, majf=0, minf=28417 00:19:14.448 IO depths : 1=0.9%, 2=1.8%, 4=3.6%, 8=7.2%, 16=14.4%, 32=28.8%, >=64=43.2% 00:19:14.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.448 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:19:14.448 issued rwts: total=111,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.448 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.448 job4: (groupid=0, jobs=1): err= 0: pid=503301: Fri Apr 26 16:30:21 2024 00:19:14.448 read: IOPS=3, BW=3641KiB/s (3728kB/s)(36.0MiB/10125msec) 00:19:14.448 slat (usec): min=888, max=2063.7k, avg=277889.89, stdev=667437.77 00:19:14.448 clat (msec): min=120, max=10110, avg=5215.84, stdev=3898.12 00:19:14.448 lat (msec): min=129, max=10124, avg=5493.73, stdev=3881.03 00:19:14.448 clat percentiles (msec): 00:19:14.448 | 1.00th=[ 121], 5.00th=[ 130], 10.00th=[ 144], 20.00th=[ 165], 00:19:14.448 | 30.00th=[ 2198], 40.00th=[ 4396], 50.00th=[ 4463], 60.00th=[ 6611], 00:19:14.448 | 70.00th=[ 8658], 80.00th=[ 8792], 90.00th=[10000], 95.00th=[10134], 00:19:14.448 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:19:14.448 | 99.99th=[10134] 00:19:14.448 lat (msec) : 250=27.78%, >=2000=72.22% 00:19:14.448 cpu : usr=0.00%, sys=0.33%, ctx=82, majf=0, minf=9217 00:19:14.448 IO depths : 1=2.8%, 2=5.6%, 4=11.1%, 8=22.2%, 16=44.4%, 32=13.9%, >=64=0.0% 00:19:14.448 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.448 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:14.448 issued rwts: total=36,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.448 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.448 job4: (groupid=0, jobs=1): err= 0: pid=503302: Fri Apr 26 16:30:21 2024 00:19:14.448 read: IOPS=39, BW=39.2MiB/s (41.1MB/s)(473MiB/12070msec) 00:19:14.448 slat (usec): min=46, max=2068.2k, avg=21137.87, stdev=168319.46 00:19:14.448 clat (msec): min=486, max=11962, avg=2803.31, stdev=3488.84 00:19:14.448 lat (msec): min=524, max=12069, avg=2824.45, stdev=3506.23 00:19:14.448 clat percentiles (msec): 00:19:14.448 | 1.00th=[ 531], 5.00th=[ 535], 10.00th=[ 535], 20.00th=[ 542], 00:19:14.448 | 30.00th=[ 542], 40.00th=[ 542], 50.00th=[ 542], 60.00th=[ 592], 00:19:14.448 | 70.00th=[ 2836], 80.00th=[ 8356], 90.00th=[ 8926], 95.00th=[ 9060], 00:19:14.448 | 99.00th=[ 9194], 99.50th=[10671], 99.90th=[12013], 99.95th=[12013], 00:19:14.448 | 99.99th=[12013] 00:19:14.448 bw ( KiB/s): min= 8178, max=249856, per=2.90%, avg=88315.12, stdev=95088.96, samples=8 00:19:14.448 iops : min= 7, max= 244, avg=86.00, stdev=93.09, samples=8 00:19:14.448 lat (msec) : 500=0.21%, 750=60.89%, 1000=5.71%, 2000=1.69%, >=2000=31.50% 00:19:14.448 cpu : usr=0.02%, sys=1.09%, ctx=428, majf=0, minf=32769 00:19:14.448 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.7%, 16=3.4%, 32=6.8%, >=64=86.7% 00:19:14.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.448 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:19:14.449 issued rwts: total=473,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.449 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.449 job4: (groupid=0, jobs=1): err= 0: pid=503303: Fri Apr 26 16:30:21 2024 00:19:14.449 read: IOPS=91, BW=91.3MiB/s (95.8MB/s)(1114MiB/12197msec) 00:19:14.449 slat (usec): min=42, max=2053.0k, avg=9018.95, stdev=112291.57 00:19:14.449 clat (msec): min=130, max=6145, avg=912.10, stdev=1606.01 00:19:14.449 lat (msec): min=131, max=6147, avg=921.12, stdev=1616.07 00:19:14.449 clat percentiles (msec): 00:19:14.449 | 1.00th=[ 132], 5.00th=[ 133], 10.00th=[ 134], 20.00th=[ 138], 00:19:14.449 | 30.00th=[ 264], 40.00th=[ 275], 50.00th=[ 279], 60.00th=[ 279], 00:19:14.449 | 70.00th=[ 284], 80.00th=[ 292], 90.00th=[ 4329], 95.00th=[ 4396], 00:19:14.449 | 99.00th=[ 6141], 99.50th=[ 6141], 99.90th=[ 6141], 99.95th=[ 6141], 00:19:14.449 | 99.99th=[ 6141] 00:19:14.449 bw ( KiB/s): min= 1939, max=729088, per=11.04%, avg=336762.67, stdev=286491.64, samples=6 00:19:14.449 iops : min= 1, max= 712, avg=328.67, stdev=279.98, samples=6 00:19:14.449 lat (msec) : 250=28.90%, 500=55.57%, >=2000=15.53% 00:19:14.449 cpu : usr=0.08%, sys=1.42%, ctx=1080, majf=0, minf=32769 00:19:14.449 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.9%, >=64=94.3% 00:19:14.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.449 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:14.449 issued rwts: total=1114,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.449 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.449 job4: (groupid=0, jobs=1): err= 0: pid=503304: Fri Apr 26 16:30:21 2024 00:19:14.449 read: IOPS=3, BW=3421KiB/s (3503kB/s)(34.0MiB/10178msec) 00:19:14.449 slat (usec): min=922, max=2144.3k, avg=295094.49, stdev=691323.22 00:19:14.449 clat (msec): min=143, max=10175, avg=7852.56, 
stdev=2712.57 00:19:14.449 lat (msec): min=2169, max=10177, avg=8147.65, stdev=2373.05 00:19:14.449 clat percentiles (msec): 00:19:14.449 | 1.00th=[ 144], 5.00th=[ 2165], 10.00th=[ 4329], 20.00th=[ 4396], 00:19:14.449 | 30.00th=[ 6544], 40.00th=[ 8658], 50.00th=[ 8658], 60.00th=[10000], 00:19:14.449 | 70.00th=[10134], 80.00th=[10134], 90.00th=[10134], 95.00th=[10134], 00:19:14.449 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:19:14.449 | 99.99th=[10134] 00:19:14.449 lat (msec) : 250=2.94%, >=2000=97.06% 00:19:14.449 cpu : usr=0.00%, sys=0.32%, ctx=75, majf=0, minf=8705 00:19:14.449 IO depths : 1=2.9%, 2=5.9%, 4=11.8%, 8=23.5%, 16=47.1%, 32=8.8%, >=64=0.0% 00:19:14.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.449 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:14.449 issued rwts: total=34,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.449 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.449 job4: (groupid=0, jobs=1): err= 0: pid=503305: Fri Apr 26 16:30:21 2024 00:19:14.449 read: IOPS=110, BW=111MiB/s (116MB/s)(1127MiB/10194msec) 00:19:14.449 slat (usec): min=41, max=2131.6k, avg=8965.54, stdev=93052.31 00:19:14.449 clat (msec): min=83, max=6523, avg=1117.79, stdev=1096.55 00:19:14.449 lat (msec): min=358, max=6588, avg=1126.76, stdev=1105.34 00:19:14.449 clat percentiles (msec): 00:19:14.449 | 1.00th=[ 359], 5.00th=[ 359], 10.00th=[ 363], 20.00th=[ 405], 00:19:14.449 | 30.00th=[ 477], 40.00th=[ 584], 50.00th=[ 659], 60.00th=[ 701], 00:19:14.449 | 70.00th=[ 751], 80.00th=[ 2601], 90.00th=[ 2702], 95.00th=[ 3440], 00:19:14.449 | 99.00th=[ 4329], 99.50th=[ 6477], 99.90th=[ 6544], 99.95th=[ 6544], 00:19:14.449 | 99.99th=[ 6544] 00:19:14.449 bw ( KiB/s): min= 2048, max=305152, per=5.59%, avg=170445.92, stdev=102431.48, samples=12 00:19:14.449 iops : min= 2, max= 298, avg=166.42, stdev=99.98, samples=12 00:19:14.449 lat (msec) : 100=0.09%, 500=32.12%, 750=38.33%, 1000=5.94%, 2000=1.51% 00:19:14.449 lat (msec) : >=2000=22.01% 00:19:14.449 cpu : usr=0.02%, sys=2.38%, ctx=929, majf=0, minf=32769 00:19:14.449 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.4% 00:19:14.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.449 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:14.449 issued rwts: total=1127,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.449 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.449 job4: (groupid=0, jobs=1): err= 0: pid=503306: Fri Apr 26 16:30:21 2024 00:19:14.449 read: IOPS=21, BW=21.4MiB/s (22.5MB/s)(218MiB/10165msec) 00:19:14.449 slat (usec): min=41, max=2048.4k, avg=46077.49, stdev=276202.48 00:19:14.449 clat (msec): min=118, max=8150, avg=3012.09, stdev=2361.30 00:19:14.449 lat (msec): min=307, max=8153, avg=3058.17, stdev=2372.56 00:19:14.449 clat percentiles (msec): 00:19:14.449 | 1.00th=[ 309], 5.00th=[ 351], 10.00th=[ 422], 20.00th=[ 1871], 00:19:14.449 | 30.00th=[ 1921], 40.00th=[ 1955], 50.00th=[ 2005], 60.00th=[ 2072], 00:19:14.449 | 70.00th=[ 2467], 80.00th=[ 4665], 90.00th=[ 8154], 95.00th=[ 8154], 00:19:14.449 | 99.00th=[ 8154], 99.50th=[ 8154], 99.90th=[ 8154], 99.95th=[ 8154], 00:19:14.449 | 99.99th=[ 8154] 00:19:14.449 bw ( KiB/s): min=184320, max=184320, per=6.04%, avg=184320.00, stdev= 0.00, samples=1 00:19:14.449 iops : min= 180, max= 180, avg=180.00, stdev= 0.00, samples=1 00:19:14.449 lat (msec) : 250=0.46%, 500=10.55%, 2000=37.61%, 
>=2000=51.38% 00:19:14.449 cpu : usr=0.00%, sys=1.30%, ctx=185, majf=0, minf=32769 00:19:14.449 IO depths : 1=0.5%, 2=0.9%, 4=1.8%, 8=3.7%, 16=7.3%, 32=14.7%, >=64=71.1% 00:19:14.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.449 complete : 0=0.0%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.1% 00:19:14.449 issued rwts: total=218,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.449 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.449 job5: (groupid=0, jobs=1): err= 0: pid=503307: Fri Apr 26 16:30:21 2024 00:19:14.449 read: IOPS=142, BW=142MiB/s (149MB/s)(1726MiB/12134msec) 00:19:14.449 slat (usec): min=43, max=2112.5k, avg=5790.99, stdev=77917.27 00:19:14.449 clat (msec): min=261, max=4546, avg=787.68, stdev=1167.73 00:19:14.449 lat (msec): min=262, max=4549, avg=793.47, stdev=1171.27 00:19:14.449 clat percentiles (msec): 00:19:14.449 | 1.00th=[ 264], 5.00th=[ 266], 10.00th=[ 268], 20.00th=[ 271], 00:19:14.449 | 30.00th=[ 275], 40.00th=[ 279], 50.00th=[ 284], 60.00th=[ 296], 00:19:14.449 | 70.00th=[ 397], 80.00th=[ 430], 90.00th=[ 2601], 95.00th=[ 4329], 00:19:14.449 | 99.00th=[ 4530], 99.50th=[ 4530], 99.90th=[ 4530], 99.95th=[ 4530], 00:19:14.449 | 99.99th=[ 4530] 00:19:14.449 bw ( KiB/s): min=16545, max=485376, per=9.75%, avg=297489.00, stdev=168051.48, samples=11 00:19:14.449 iops : min= 16, max= 474, avg=290.36, stdev=164.21, samples=11 00:19:14.449 lat (msec) : 500=83.02%, 750=0.87%, 2000=0.87%, >=2000=15.24% 00:19:14.449 cpu : usr=0.03%, sys=2.24%, ctx=1492, majf=0, minf=32769 00:19:14.449 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.9%, >=64=96.3% 00:19:14.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.449 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:14.449 issued rwts: total=1726,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.449 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.449 job5: (groupid=0, jobs=1): err= 0: pid=503308: Fri Apr 26 16:30:21 2024 00:19:14.449 read: IOPS=105, BW=105MiB/s (110MB/s)(1061MiB/10088msec) 00:19:14.449 slat (usec): min=38, max=2073.5k, avg=9438.52, stdev=109020.32 00:19:14.449 clat (msec): min=69, max=7944, avg=1178.74, stdev=1852.39 00:19:14.449 lat (msec): min=98, max=7957, avg=1188.17, stdev=1859.96 00:19:14.449 clat percentiles (msec): 00:19:14.449 | 1.00th=[ 117], 5.00th=[ 288], 10.00th=[ 351], 20.00th=[ 384], 00:19:14.449 | 30.00th=[ 397], 40.00th=[ 477], 50.00th=[ 481], 60.00th=[ 489], 00:19:14.449 | 70.00th=[ 506], 80.00th=[ 523], 90.00th=[ 4597], 95.00th=[ 6611], 00:19:14.449 | 99.00th=[ 6745], 99.50th=[ 6745], 99.90th=[ 6745], 99.95th=[ 7953], 00:19:14.449 | 99.99th=[ 7953] 00:19:14.449 bw ( KiB/s): min=10240, max=356352, per=5.70%, avg=173754.27, stdev=139621.67, samples=11 00:19:14.449 iops : min= 10, max= 348, avg=169.64, stdev=136.32, samples=11 00:19:14.449 lat (msec) : 100=0.19%, 250=4.24%, 500=64.66%, 750=15.08%, >=2000=15.83% 00:19:14.449 cpu : usr=0.02%, sys=1.60%, ctx=899, majf=0, minf=32769 00:19:14.449 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.0%, >=64=94.1% 00:19:14.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.449 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:14.449 issued rwts: total=1061,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.449 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.449 job5: (groupid=0, jobs=1): err= 0: pid=503309: Fri Apr 26 16:30:21 
2024 00:19:14.449 read: IOPS=160, BW=161MiB/s (168MB/s)(1620MiB/10090msec) 00:19:14.449 slat (usec): min=423, max=1942.8k, avg=6180.80, stdev=59079.19 00:19:14.449 clat (msec): min=68, max=3216, avg=495.94, stdev=410.80 00:19:14.449 lat (msec): min=91, max=4556, avg=502.12, stdev=425.09 00:19:14.449 clat percentiles (msec): 00:19:14.449 | 1.00th=[ 188], 5.00th=[ 218], 10.00th=[ 220], 20.00th=[ 224], 00:19:14.449 | 30.00th=[ 228], 40.00th=[ 230], 50.00th=[ 232], 60.00th=[ 330], 00:19:14.449 | 70.00th=[ 718], 80.00th=[ 785], 90.00th=[ 1133], 95.00th=[ 1250], 00:19:14.449 | 99.00th=[ 1368], 99.50th=[ 3138], 99.90th=[ 3205], 99.95th=[ 3205], 00:19:14.449 | 99.99th=[ 3205] 00:19:14.449 bw ( KiB/s): min=98304, max=585728, per=9.09%, avg=277127.09, stdev=194826.70, samples=11 00:19:14.449 iops : min= 96, max= 572, avg=270.55, stdev=190.29, samples=11 00:19:14.449 lat (msec) : 100=0.12%, 250=54.51%, 500=8.40%, 750=10.56%, 1000=14.75% 00:19:14.449 lat (msec) : 2000=10.99%, >=2000=0.68% 00:19:14.449 cpu : usr=0.08%, sys=2.09%, ctx=2926, majf=0, minf=32769 00:19:14.449 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.1% 00:19:14.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.449 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:14.449 issued rwts: total=1620,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.449 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.449 job5: (groupid=0, jobs=1): err= 0: pid=503310: Fri Apr 26 16:30:21 2024 00:19:14.449 read: IOPS=87, BW=87.1MiB/s (91.3MB/s)(884MiB/10153msec) 00:19:14.449 slat (usec): min=45, max=2148.1k, avg=11364.00, stdev=129584.46 00:19:14.449 clat (msec): min=104, max=6037, avg=890.49, stdev=1313.44 00:19:14.449 lat (msec): min=257, max=6038, avg=901.86, stdev=1325.06 00:19:14.449 clat percentiles (msec): 00:19:14.449 | 1.00th=[ 255], 5.00th=[ 257], 10.00th=[ 257], 20.00th=[ 257], 00:19:14.449 | 30.00th=[ 259], 40.00th=[ 262], 50.00th=[ 284], 60.00th=[ 355], 00:19:14.449 | 70.00th=[ 447], 80.00th=[ 2299], 90.00th=[ 2467], 95.00th=[ 2534], 00:19:14.449 | 99.00th=[ 6007], 99.50th=[ 6007], 99.90th=[ 6007], 99.95th=[ 6007], 00:19:14.449 | 99.99th=[ 6007] 00:19:14.449 bw ( KiB/s): min=339968, max=505856, per=12.69%, avg=387072.00, stdev=79723.07, samples=4 00:19:14.449 iops : min= 332, max= 494, avg=378.00, stdev=77.85, samples=4 00:19:14.449 lat (msec) : 250=0.11%, 500=75.23%, 750=4.41%, >=2000=20.25% 00:19:14.449 cpu : usr=0.02%, sys=1.62%, ctx=1525, majf=0, minf=32769 00:19:14.449 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=92.9% 00:19:14.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.449 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:14.449 issued rwts: total=884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.449 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.449 job5: (groupid=0, jobs=1): err= 0: pid=503311: Fri Apr 26 16:30:21 2024 00:19:14.449 read: IOPS=52, BW=52.5MiB/s (55.1MB/s)(530MiB/10088msec) 00:19:14.449 slat (usec): min=106, max=1976.8k, avg=18938.18, stdev=130998.98 00:19:14.449 clat (msec): min=45, max=4473, avg=1719.40, stdev=953.11 00:19:14.449 lat (msec): min=136, max=4524, avg=1738.34, stdev=957.66 00:19:14.449 clat percentiles (msec): 00:19:14.449 | 1.00th=[ 159], 5.00th=[ 860], 10.00th=[ 894], 20.00th=[ 969], 00:19:14.449 | 30.00th=[ 1099], 40.00th=[ 1284], 50.00th=[ 1401], 60.00th=[ 1452], 00:19:14.449 | 70.00th=[ 2198], 
80.00th=[ 2735], 90.00th=[ 3037], 95.00th=[ 3272], 00:19:14.449 | 99.00th=[ 4463], 99.50th=[ 4463], 99.90th=[ 4463], 99.95th=[ 4463], 00:19:14.449 | 99.99th=[ 4463] 00:19:14.449 bw ( KiB/s): min=30720, max=155648, per=3.25%, avg=99050.00, stdev=36967.92, samples=8 00:19:14.449 iops : min= 30, max= 152, avg=96.62, stdev=36.14, samples=8 00:19:14.449 lat (msec) : 50=0.19%, 250=2.83%, 1000=20.38%, 2000=45.09%, >=2000=31.51% 00:19:14.449 cpu : usr=0.03%, sys=1.50%, ctx=1039, majf=0, minf=32769 00:19:14.449 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.0%, 32=6.0%, >=64=88.1% 00:19:14.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.449 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:14.449 issued rwts: total=530,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.449 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.449 job5: (groupid=0, jobs=1): err= 0: pid=503312: Fri Apr 26 16:30:21 2024 00:19:14.449 read: IOPS=426, BW=427MiB/s (447MB/s)(4274MiB/10015msec) 00:19:14.449 slat (usec): min=39, max=2012.0k, avg=2336.71, stdev=36303.63 00:19:14.449 clat (msec): min=14, max=2728, avg=259.99, stdev=436.96 00:19:14.449 lat (msec): min=16, max=2730, avg=262.33, stdev=439.23 00:19:14.449 clat percentiles (msec): 00:19:14.449 | 1.00th=[ 90], 5.00th=[ 120], 10.00th=[ 122], 20.00th=[ 127], 00:19:14.449 | 30.00th=[ 128], 40.00th=[ 129], 50.00th=[ 134], 60.00th=[ 184], 00:19:14.449 | 70.00th=[ 232], 80.00th=[ 241], 90.00th=[ 275], 95.00th=[ 372], 00:19:14.449 | 99.00th=[ 2702], 99.50th=[ 2702], 99.90th=[ 2735], 99.95th=[ 2735], 00:19:14.449 | 99.99th=[ 2735] 00:19:14.449 bw ( KiB/s): min=18432, max=1081344, per=19.89%, avg=606646.86, stdev=310609.31, samples=14 00:19:14.449 iops : min= 18, max= 1056, avg=592.43, stdev=303.33, samples=14 00:19:14.449 lat (msec) : 20=0.09%, 50=0.37%, 100=0.66%, 250=82.76%, 500=12.19% 00:19:14.449 lat (msec) : 750=0.21%, 2000=0.75%, >=2000=2.97% 00:19:14.449 cpu : usr=0.11%, sys=3.91%, ctx=4993, majf=0, minf=32769 00:19:14.449 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:19:14.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.449 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:14.449 issued rwts: total=4274,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.449 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.449 job5: (groupid=0, jobs=1): err= 0: pid=503313: Fri Apr 26 16:30:21 2024 00:19:14.449 read: IOPS=41, BW=41.8MiB/s (43.8MB/s)(508MiB/12158msec) 00:19:14.449 slat (usec): min=494, max=2115.8k, avg=19689.16, stdev=131777.23 00:19:14.449 clat (msec): min=1064, max=8066, avg=2918.19, stdev=1072.20 00:19:14.449 lat (msec): min=1066, max=8083, avg=2937.88, stdev=1088.63 00:19:14.449 clat percentiles (msec): 00:19:14.449 | 1.00th=[ 1083], 5.00th=[ 1116], 10.00th=[ 1133], 20.00th=[ 1234], 00:19:14.449 | 30.00th=[ 2567], 40.00th=[ 3306], 50.00th=[ 3507], 60.00th=[ 3574], 00:19:14.449 | 70.00th=[ 3608], 80.00th=[ 3675], 90.00th=[ 3809], 95.00th=[ 3910], 00:19:14.449 | 99.00th=[ 3977], 99.50th=[ 5873], 99.90th=[ 8087], 99.95th=[ 8087], 00:19:14.449 | 99.99th=[ 8087] 00:19:14.449 bw ( KiB/s): min= 6144, max=116969, per=2.13%, avg=65065.25, stdev=35974.63, samples=12 00:19:14.449 iops : min= 6, max= 114, avg=63.50, stdev=35.10, samples=12 00:19:14.449 lat (msec) : 2000=25.20%, >=2000=74.80% 00:19:14.449 cpu : usr=0.02%, sys=1.27%, ctx=1353, majf=0, minf=32769 00:19:14.449 IO depths : 1=0.2%, 
2=0.4%, 4=0.8%, 8=1.6%, 16=3.1%, 32=6.3%, >=64=87.6% 00:19:14.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.449 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:19:14.449 issued rwts: total=508,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.449 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.449 job5: (groupid=0, jobs=1): err= 0: pid=503314: Fri Apr 26 16:30:21 2024 00:19:14.449 read: IOPS=94, BW=94.6MiB/s (99.2MB/s)(954MiB/10085msec) 00:19:14.449 slat (usec): min=109, max=2129.9k, avg=10483.60, stdev=98939.34 00:19:14.449 clat (msec): min=78, max=3806, avg=1151.21, stdev=1143.58 00:19:14.449 lat (msec): min=85, max=3827, avg=1161.70, stdev=1148.87 00:19:14.449 clat percentiles (msec): 00:19:14.449 | 1.00th=[ 112], 5.00th=[ 257], 10.00th=[ 257], 20.00th=[ 259], 00:19:14.449 | 30.00th=[ 259], 40.00th=[ 262], 50.00th=[ 405], 60.00th=[ 642], 00:19:14.449 | 70.00th=[ 1989], 80.00th=[ 2366], 90.00th=[ 3373], 95.00th=[ 3440], 00:19:14.449 | 99.00th=[ 3641], 99.50th=[ 3708], 99.90th=[ 3809], 99.95th=[ 3809], 00:19:14.449 | 99.99th=[ 3809] 00:19:14.449 bw ( KiB/s): min=20480, max=501760, per=6.17%, avg=188166.00, stdev=173316.60, samples=9 00:19:14.449 iops : min= 20, max= 490, avg=183.56, stdev=169.43, samples=9 00:19:14.449 lat (msec) : 100=0.52%, 250=0.52%, 500=53.67%, 750=6.60%, 1000=0.73% 00:19:14.449 lat (msec) : 2000=8.39%, >=2000=29.56% 00:19:14.449 cpu : usr=0.02%, sys=1.80%, ctx=1896, majf=0, minf=32769 00:19:14.449 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.4%, >=64=93.4% 00:19:14.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.449 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:14.449 issued rwts: total=954,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.449 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.449 job5: (groupid=0, jobs=1): err= 0: pid=503315: Fri Apr 26 16:30:21 2024 00:19:14.449 read: IOPS=104, BW=105MiB/s (110MB/s)(1282MiB/12211msec) 00:19:14.449 slat (usec): min=38, max=2039.4k, avg=7882.18, stdev=77923.75 00:19:14.449 clat (msec): min=219, max=4249, avg=1100.21, stdev=1095.89 00:19:14.449 lat (msec): min=220, max=4249, avg=1108.09, stdev=1098.41 00:19:14.449 clat percentiles (msec): 00:19:14.449 | 1.00th=[ 222], 5.00th=[ 239], 10.00th=[ 275], 20.00th=[ 397], 00:19:14.449 | 30.00th=[ 518], 40.00th=[ 584], 50.00th=[ 625], 60.00th=[ 667], 00:19:14.449 | 70.00th=[ 676], 80.00th=[ 2072], 90.00th=[ 2802], 95.00th=[ 3608], 00:19:14.449 | 99.00th=[ 3708], 99.50th=[ 4111], 99.90th=[ 4245], 99.95th=[ 4245], 00:19:14.449 | 99.99th=[ 4245] 00:19:14.449 bw ( KiB/s): min= 1882, max=382976, per=6.46%, avg=197106.17, stdev=109771.30, samples=12 00:19:14.449 iops : min= 1, max= 374, avg=192.42, stdev=107.33, samples=12 00:19:14.449 lat (msec) : 250=7.80%, 500=21.76%, 750=45.87%, 1000=0.78%, 2000=0.16% 00:19:14.449 lat (msec) : >=2000=23.63% 00:19:14.449 cpu : usr=0.04%, sys=1.66%, ctx=1119, majf=0, minf=32769 00:19:14.449 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.5%, >=64=95.1% 00:19:14.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.449 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:14.449 issued rwts: total=1282,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.449 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.449 job5: (groupid=0, jobs=1): err= 0: pid=503316: Fri Apr 26 16:30:21 2024 00:19:14.449 
read: IOPS=57, BW=57.0MiB/s (59.8MB/s)(696MiB/12207msec) 00:19:14.449 slat (usec): min=72, max=2066.2k, avg=14436.55, stdev=132118.96 00:19:14.449 clat (msec): min=403, max=6944, avg=2169.86, stdev=2263.57 00:19:14.449 lat (msec): min=405, max=6946, avg=2184.30, stdev=2269.00 00:19:14.449 clat percentiles (msec): 00:19:14.449 | 1.00th=[ 405], 5.00th=[ 409], 10.00th=[ 414], 20.00th=[ 418], 00:19:14.449 | 30.00th=[ 493], 40.00th=[ 701], 50.00th=[ 936], 60.00th=[ 1217], 00:19:14.449 | 70.00th=[ 2903], 80.00th=[ 3373], 90.00th=[ 6544], 95.00th=[ 6745], 00:19:14.450 | 99.00th=[ 6879], 99.50th=[ 6879], 99.90th=[ 6946], 99.95th=[ 6946], 00:19:14.450 | 99.99th=[ 6946] 00:19:14.450 bw ( KiB/s): min= 1882, max=274432, per=3.82%, avg=116514.60, stdev=98908.88, samples=10 00:19:14.450 iops : min= 1, max= 268, avg=113.70, stdev=96.70, samples=10 00:19:14.450 lat (msec) : 500=30.32%, 750=13.22%, 1000=7.76%, 2000=11.49%, >=2000=37.21% 00:19:14.450 cpu : usr=0.02%, sys=1.29%, ctx=792, majf=0, minf=32272 00:19:14.450 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.3%, 32=4.6%, >=64=90.9% 00:19:14.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.450 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:14.450 issued rwts: total=696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.450 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.450 job5: (groupid=0, jobs=1): err= 0: pid=503317: Fri Apr 26 16:30:21 2024 00:19:14.450 read: IOPS=73, BW=73.9MiB/s (77.5MB/s)(897MiB/12136msec) 00:19:14.450 slat (usec): min=502, max=2115.9k, avg=11155.43, stdev=81836.84 00:19:14.450 clat (msec): min=772, max=3167, avg=1514.93, stdev=874.20 00:19:14.450 lat (msec): min=778, max=3173, avg=1526.09, stdev=875.35 00:19:14.450 clat percentiles (msec): 00:19:14.450 | 1.00th=[ 776], 5.00th=[ 785], 10.00th=[ 802], 20.00th=[ 835], 00:19:14.450 | 30.00th=[ 894], 40.00th=[ 936], 50.00th=[ 1020], 60.00th=[ 1150], 00:19:14.450 | 70.00th=[ 2165], 80.00th=[ 2567], 90.00th=[ 3104], 95.00th=[ 3138], 00:19:14.450 | 99.00th=[ 3171], 99.50th=[ 3171], 99.90th=[ 3171], 99.95th=[ 3171], 00:19:14.450 | 99.99th=[ 3171] 00:19:14.450 bw ( KiB/s): min= 1756, max=165888, per=3.69%, avg=112610.36, stdev=52792.90, samples=14 00:19:14.450 iops : min= 1, max= 162, avg=109.86, stdev=51.74, samples=14 00:19:14.450 lat (msec) : 1000=47.94%, 2000=20.74%, >=2000=31.33% 00:19:14.450 cpu : usr=0.07%, sys=1.24%, ctx=1638, majf=0, minf=32769 00:19:14.450 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=93.0% 00:19:14.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.450 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:14.450 issued rwts: total=897,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.450 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.450 job5: (groupid=0, jobs=1): err= 0: pid=503318: Fri Apr 26 16:30:21 2024 00:19:14.450 read: IOPS=125, BW=126MiB/s (132MB/s)(1280MiB/10166msec) 00:19:14.450 slat (usec): min=66, max=2126.2k, avg=7826.44, stdev=101287.13 00:19:14.450 clat (msec): min=140, max=4904, avg=979.07, stdev=1370.64 00:19:14.450 lat (msec): min=174, max=4920, avg=986.89, stdev=1374.86 00:19:14.450 clat percentiles (msec): 00:19:14.450 | 1.00th=[ 182], 5.00th=[ 205], 10.00th=[ 262], 20.00th=[ 284], 00:19:14.450 | 30.00th=[ 288], 40.00th=[ 292], 50.00th=[ 355], 60.00th=[ 405], 00:19:14.450 | 70.00th=[ 430], 80.00th=[ 2299], 90.00th=[ 2534], 95.00th=[ 4732], 00:19:14.450 | 
99.00th=[ 4866], 99.50th=[ 4933], 99.90th=[ 4933], 99.95th=[ 4933], 00:19:14.450 | 99.99th=[ 4933] 00:19:14.450 bw ( KiB/s): min=24576, max=512000, per=9.67%, avg=294912.00, stdev=172466.82, samples=8 00:19:14.450 iops : min= 24, max= 500, avg=288.00, stdev=168.42, samples=8 00:19:14.450 lat (msec) : 250=8.20%, 500=66.02%, 750=5.00%, >=2000=20.78% 00:19:14.450 cpu : usr=0.06%, sys=1.84%, ctx=1215, majf=0, minf=32769 00:19:14.450 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.5%, >=64=95.1% 00:19:14.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.450 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:14.450 issued rwts: total=1280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.450 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.450 job5: (groupid=0, jobs=1): err= 0: pid=503319: Fri Apr 26 16:30:21 2024 00:19:14.450 read: IOPS=34, BW=34.6MiB/s (36.3MB/s)(419MiB/12119msec) 00:19:14.450 slat (usec): min=92, max=2131.0k, avg=28742.72, stdev=157425.63 00:19:14.450 clat (msec): min=72, max=7214, avg=3066.29, stdev=1864.00 00:19:14.450 lat (msec): min=1389, max=7223, avg=3095.03, stdev=1864.98 00:19:14.450 clat percentiles (msec): 00:19:14.450 | 1.00th=[ 1401], 5.00th=[ 1401], 10.00th=[ 1485], 20.00th=[ 1519], 00:19:14.450 | 30.00th=[ 1569], 40.00th=[ 1703], 50.00th=[ 1787], 60.00th=[ 3104], 00:19:14.450 | 70.00th=[ 4111], 80.00th=[ 4799], 90.00th=[ 6611], 95.00th=[ 6879], 00:19:14.450 | 99.00th=[ 7148], 99.50th=[ 7148], 99.90th=[ 7215], 99.95th=[ 7215], 00:19:14.450 | 99.99th=[ 7215] 00:19:14.450 bw ( KiB/s): min=14336, max=88064, per=1.63%, avg=49655.42, stdev=25350.52, samples=12 00:19:14.450 iops : min= 14, max= 86, avg=48.33, stdev=24.92, samples=12 00:19:14.450 lat (msec) : 100=0.24%, 2000=50.36%, >=2000=49.40% 00:19:14.450 cpu : usr=0.02%, sys=0.90%, ctx=1622, majf=0, minf=32769 00:19:14.450 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=1.9%, 16=3.8%, 32=7.6%, >=64=85.0% 00:19:14.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.450 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:19:14.450 issued rwts: total=419,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.450 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:14.450 00:19:14.450 Run status group 0 (all jobs): 00:19:14.450 READ: bw=2979MiB/s (3123MB/s), 675KiB/s-427MiB/s (691kB/s-447MB/s), io=35.8GiB (38.4GB), run=10015-12297msec 00:19:14.450 00:19:14.450 Disk stats (read/write): 00:19:14.450 nvme0n1: ios=6114/0, merge=0/0, ticks=7223246/0, in_queue=7223246, util=98.43% 00:19:14.450 nvme1n1: ios=5507/0, merge=0/0, ticks=8952403/0, in_queue=8952403, util=98.72% 00:19:14.450 nvme2n1: ios=29115/0, merge=0/0, ticks=6605430/0, in_queue=6605430, util=98.86% 00:19:14.450 nvme3n1: ios=59664/0, merge=0/0, ticks=10011824/0, in_queue=10011824, util=98.84% 00:19:14.450 nvme4n1: ios=61993/0, merge=0/0, ticks=10907528/0, in_queue=10907528, util=99.19% 00:19:14.450 nvme5n1: ios=128818/0, merge=0/0, ticks=11036942/0, in_queue=11036942, util=99.33% 00:19:14.450 16:30:21 -- target/srq_overwhelm.sh@38 -- # sync 00:19:14.450 16:30:21 -- target/srq_overwhelm.sh@40 -- # seq 0 5 00:19:14.450 16:30:21 -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:19:14.450 16:30:21 -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0 00:19:15.826 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.826 16:30:24 -- target/srq_overwhelm.sh@42 -- # 
waitforserial_disconnect SPDK00000000000000 00:19:15.826 16:30:24 -- common/autotest_common.sh@1205 -- # local i=0 00:19:15.826 16:30:24 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:19:15.826 16:30:24 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK00000000000000 00:19:15.826 16:30:24 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK00000000000000 00:19:15.826 16:30:24 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:19:15.826 16:30:24 -- common/autotest_common.sh@1217 -- # return 0 00:19:15.826 16:30:24 -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:15.826 16:30:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:15.826 16:30:24 -- common/autotest_common.sh@10 -- # set +x 00:19:15.826 16:30:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:15.826 16:30:24 -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:19:15.826 16:30:24 -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:19.113 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:19.113 16:30:27 -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001 00:19:19.113 16:30:27 -- common/autotest_common.sh@1205 -- # local i=0 00:19:19.113 16:30:27 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:19:19.113 16:30:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK00000000000001 00:19:19.113 16:30:27 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:19:19.113 16:30:27 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK00000000000001 00:19:19.113 16:30:27 -- common/autotest_common.sh@1217 -- # return 0 00:19:19.113 16:30:27 -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:19.113 16:30:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:19.113 16:30:27 -- common/autotest_common.sh@10 -- # set +x 00:19:19.113 16:30:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:19.113 16:30:27 -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:19:19.113 16:30:27 -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:19:22.400 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:19:22.400 16:30:31 -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 00:19:22.400 16:30:31 -- common/autotest_common.sh@1205 -- # local i=0 00:19:22.400 16:30:31 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:19:22.400 16:30:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK00000000000002 00:19:22.400 16:30:31 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:19:22.400 16:30:31 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK00000000000002 00:19:22.400 16:30:31 -- common/autotest_common.sh@1217 -- # return 0 00:19:22.400 16:30:31 -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:22.401 16:30:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.401 16:30:31 -- common/autotest_common.sh@10 -- # set +x 00:19:22.401 16:30:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.401 16:30:31 -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:19:22.401 16:30:31 -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:19:25.688 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:19:25.688 16:30:34 -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect 
SPDK00000000000003 00:19:25.688 16:30:34 -- common/autotest_common.sh@1205 -- # local i=0 00:19:25.688 16:30:34 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:19:25.688 16:30:34 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK00000000000003 00:19:25.688 16:30:34 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:19:25.688 16:30:34 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK00000000000003 00:19:25.688 16:30:34 -- common/autotest_common.sh@1217 -- # return 0 00:19:25.688 16:30:34 -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:19:25.688 16:30:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.688 16:30:34 -- common/autotest_common.sh@10 -- # set +x 00:19:25.689 16:30:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.689 16:30:34 -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:19:25.689 16:30:34 -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:19:28.969 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:19:28.969 16:30:37 -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:19:28.969 16:30:37 -- common/autotest_common.sh@1205 -- # local i=0 00:19:28.969 16:30:37 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:19:28.969 16:30:37 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK00000000000004 00:19:28.969 16:30:37 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:19:28.969 16:30:37 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK00000000000004 00:19:28.969 16:30:37 -- common/autotest_common.sh@1217 -- # return 0 00:19:28.970 16:30:37 -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:19:28.970 16:30:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:28.970 16:30:37 -- common/autotest_common.sh@10 -- # set +x 00:19:28.970 16:30:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:28.970 16:30:37 -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:19:28.970 16:30:37 -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:19:32.252 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:19:32.252 16:30:40 -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:19:32.252 16:30:40 -- common/autotest_common.sh@1205 -- # local i=0 00:19:32.252 16:30:40 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:19:32.252 16:30:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK00000000000005 00:19:32.252 16:30:40 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:19:32.252 16:30:40 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK00000000000005 00:19:32.252 16:30:40 -- common/autotest_common.sh@1217 -- # return 0 00:19:32.252 16:30:40 -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:19:32.252 16:30:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:32.252 16:30:40 -- common/autotest_common.sh@10 -- # set +x 00:19:32.252 16:30:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:32.252 16:30:40 -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:19:32.252 16:30:40 -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:19:32.252 16:30:40 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:32.252 16:30:40 -- nvmf/common.sh@117 -- # sync 00:19:32.252 16:30:41 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:19:32.252 16:30:41 -- 
nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:19:32.252 16:30:41 -- nvmf/common.sh@120 -- # set +e 00:19:32.252 16:30:41 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:32.252 16:30:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:19:32.252 rmmod nvme_rdma 00:19:32.252 rmmod nvme_fabrics 00:19:32.252 16:30:41 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:32.252 16:30:41 -- nvmf/common.sh@124 -- # set -e 00:19:32.252 16:30:41 -- nvmf/common.sh@125 -- # return 0 00:19:32.252 16:30:41 -- nvmf/common.sh@478 -- # '[' -n 501014 ']' 00:19:32.252 16:30:41 -- nvmf/common.sh@479 -- # killprocess 501014 00:19:32.252 16:30:41 -- common/autotest_common.sh@936 -- # '[' -z 501014 ']' 00:19:32.252 16:30:41 -- common/autotest_common.sh@940 -- # kill -0 501014 00:19:32.252 16:30:41 -- common/autotest_common.sh@941 -- # uname 00:19:32.252 16:30:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:32.252 16:30:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 501014 00:19:32.252 16:30:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:32.252 16:30:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:32.252 16:30:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 501014' 00:19:32.252 killing process with pid 501014 00:19:32.252 16:30:41 -- common/autotest_common.sh@955 -- # kill 501014 00:19:32.252 16:30:41 -- common/autotest_common.sh@960 -- # wait 501014 00:19:32.252 [2024-04-26 16:30:41.165595] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:19:32.512 16:30:41 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:32.512 16:30:41 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:19:32.512 00:19:32.512 real 0m49.951s 00:19:32.512 user 2m59.194s 00:19:32.512 sys 0m15.324s 00:19:32.512 16:30:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:32.512 16:30:41 -- common/autotest_common.sh@10 -- # set +x 00:19:32.512 ************************************ 00:19:32.512 END TEST nvmf_srq_overwhelm 00:19:32.512 ************************************ 00:19:32.772 16:30:41 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:19:32.772 16:30:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:32.772 16:30:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:32.772 16:30:41 -- common/autotest_common.sh@10 -- # set +x 00:19:32.772 ************************************ 00:19:32.772 START TEST nvmf_shutdown 00:19:32.772 ************************************ 00:19:32.772 16:30:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:19:33.032 * Looking for test storage... 
00:19:33.032 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:33.032 16:30:41 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:33.032 16:30:41 -- nvmf/common.sh@7 -- # uname -s 00:19:33.032 16:30:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:33.032 16:30:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:33.032 16:30:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:33.032 16:30:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:33.032 16:30:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:33.032 16:30:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:33.032 16:30:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:33.032 16:30:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:33.032 16:30:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:33.032 16:30:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:33.032 16:30:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:19:33.032 16:30:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:19:33.032 16:30:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:33.032 16:30:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:33.032 16:30:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:33.032 16:30:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:33.032 16:30:41 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:33.032 16:30:41 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:33.032 16:30:41 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:33.032 16:30:41 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:33.032 16:30:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.032 16:30:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.033 16:30:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.033 16:30:41 -- paths/export.sh@5 -- # export PATH 00:19:33.033 16:30:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.033 16:30:41 -- nvmf/common.sh@47 -- # : 0 00:19:33.033 16:30:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:33.033 16:30:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:33.033 16:30:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:33.033 16:30:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:33.033 16:30:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:33.033 16:30:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:33.033 16:30:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:33.033 16:30:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:33.033 16:30:41 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:33.033 16:30:41 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:33.033 16:30:41 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:19:33.033 16:30:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:33.033 16:30:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:33.033 16:30:41 -- common/autotest_common.sh@10 -- # set +x 00:19:33.033 ************************************ 00:19:33.033 START TEST nvmf_shutdown_tc1 00:19:33.033 ************************************ 00:19:33.033 16:30:41 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc1 00:19:33.033 16:30:41 -- target/shutdown.sh@74 -- # starttarget 00:19:33.033 16:30:41 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:33.033 16:30:41 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:19:33.033 16:30:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:33.033 16:30:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:33.033 16:30:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:33.033 16:30:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:33.033 16:30:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.033 16:30:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:33.033 16:30:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.033 16:30:42 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:33.033 16:30:42 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:33.033 16:30:42 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:33.033 16:30:42 -- common/autotest_common.sh@10 -- # set +x 00:19:39.600 16:30:47 -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci 00:19:39.600 16:30:47 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:39.600 16:30:47 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:39.600 16:30:47 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:39.600 16:30:47 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:39.600 16:30:47 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:39.600 16:30:47 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:39.600 16:30:47 -- nvmf/common.sh@295 -- # net_devs=() 00:19:39.600 16:30:47 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:39.600 16:30:47 -- nvmf/common.sh@296 -- # e810=() 00:19:39.600 16:30:47 -- nvmf/common.sh@296 -- # local -ga e810 00:19:39.600 16:30:47 -- nvmf/common.sh@297 -- # x722=() 00:19:39.600 16:30:47 -- nvmf/common.sh@297 -- # local -ga x722 00:19:39.600 16:30:47 -- nvmf/common.sh@298 -- # mlx=() 00:19:39.600 16:30:47 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:39.600 16:30:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:39.600 16:30:47 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:39.600 16:30:47 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:39.600 16:30:47 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:39.600 16:30:47 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:39.600 16:30:47 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:39.600 16:30:47 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:39.600 16:30:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:39.600 16:30:47 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:39.600 16:30:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:39.600 16:30:47 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:39.600 16:30:47 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:39.600 16:30:47 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:19:39.600 16:30:47 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:19:39.600 16:30:47 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:19:39.600 16:30:47 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:19:39.600 16:30:47 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:19:39.600 16:30:47 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:39.600 16:30:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:39.600 16:30:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:19:39.600 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:19:39.600 16:30:47 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:39.600 16:30:47 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:39.600 16:30:47 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:19:39.600 16:30:47 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:19:39.600 16:30:47 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:39.600 16:30:47 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:39.600 16:30:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:39.600 16:30:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:19:39.600 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:19:39.600 16:30:47 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:39.600 16:30:47 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:39.600 16:30:47 -- nvmf/common.sh@350 -- # [[ 0x1013 == 
\0\x\1\0\1\7 ]] 00:19:39.600 16:30:47 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:19:39.600 16:30:47 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:39.600 16:30:47 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:39.600 16:30:47 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:39.600 16:30:47 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:19:39.600 16:30:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:39.600 16:30:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:39.600 16:30:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:39.600 16:30:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:39.600 16:30:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:19:39.600 Found net devices under 0000:18:00.0: mlx_0_0 00:19:39.600 16:30:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:39.600 16:30:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:39.600 16:30:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:39.600 16:30:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:39.600 16:30:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:39.600 16:30:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:19:39.600 Found net devices under 0000:18:00.1: mlx_0_1 00:19:39.600 16:30:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:39.600 16:30:47 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:39.600 16:30:47 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:39.600 16:30:47 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:39.600 16:30:47 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:19:39.600 16:30:47 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:19:39.600 16:30:47 -- nvmf/common.sh@409 -- # rdma_device_init 00:19:39.600 16:30:47 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:19:39.600 16:30:47 -- nvmf/common.sh@58 -- # uname 00:19:39.600 16:30:47 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:19:39.600 16:30:47 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:19:39.600 16:30:47 -- nvmf/common.sh@63 -- # modprobe ib_core 00:19:39.600 16:30:47 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:19:39.600 16:30:47 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:19:39.600 16:30:47 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:19:39.600 16:30:47 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:19:39.600 16:30:47 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:19:39.600 16:30:47 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:19:39.600 16:30:47 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:39.600 16:30:47 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:19:39.600 16:30:47 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:39.600 16:30:47 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:39.600 16:30:47 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:39.600 16:30:47 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:39.600 16:30:48 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:39.601 16:30:48 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:39.601 16:30:48 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:39.601 16:30:48 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:39.601 16:30:48 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:39.601 16:30:48 -- nvmf/common.sh@105 -- # continue 2 
00:19:39.601 16:30:48 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:39.601 16:30:48 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:39.601 16:30:48 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:39.601 16:30:48 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:39.601 16:30:48 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:39.601 16:30:48 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:39.601 16:30:48 -- nvmf/common.sh@105 -- # continue 2 00:19:39.601 16:30:48 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:39.601 16:30:48 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:19:39.601 16:30:48 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:39.601 16:30:48 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:39.601 16:30:48 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:39.601 16:30:48 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:39.601 16:30:48 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:19:39.601 16:30:48 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:19:39.601 16:30:48 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:19:39.601 15: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:39.601 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:19:39.601 altname enp24s0f0np0 00:19:39.601 altname ens785f0np0 00:19:39.601 inet 192.168.100.8/24 scope global mlx_0_0 00:19:39.601 valid_lft forever preferred_lft forever 00:19:39.601 16:30:48 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:39.601 16:30:48 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:19:39.601 16:30:48 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:39.601 16:30:48 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:39.601 16:30:48 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:39.601 16:30:48 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:39.601 16:30:48 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:19:39.601 16:30:48 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:19:39.601 16:30:48 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:19:39.601 16: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:39.601 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:19:39.601 altname enp24s0f1np1 00:19:39.601 altname ens785f1np1 00:19:39.601 inet 192.168.100.9/24 scope global mlx_0_1 00:19:39.601 valid_lft forever preferred_lft forever 00:19:39.601 16:30:48 -- nvmf/common.sh@411 -- # return 0 00:19:39.601 16:30:48 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:39.601 16:30:48 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:39.601 16:30:48 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:19:39.601 16:30:48 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:19:39.601 16:30:48 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:19:39.601 16:30:48 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:39.601 16:30:48 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:39.601 16:30:48 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:39.601 16:30:48 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:39.601 16:30:48 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:39.601 16:30:48 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:39.601 16:30:48 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:39.601 16:30:48 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:39.601 16:30:48 -- 
nvmf/common.sh@104 -- # echo mlx_0_0 00:19:39.601 16:30:48 -- nvmf/common.sh@105 -- # continue 2 00:19:39.601 16:30:48 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:39.601 16:30:48 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:39.601 16:30:48 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:39.601 16:30:48 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:39.601 16:30:48 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:39.601 16:30:48 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:39.601 16:30:48 -- nvmf/common.sh@105 -- # continue 2 00:19:39.601 16:30:48 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:39.601 16:30:48 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:19:39.601 16:30:48 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:39.601 16:30:48 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:39.601 16:30:48 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:39.601 16:30:48 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:39.601 16:30:48 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:39.601 16:30:48 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:19:39.601 16:30:48 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:39.601 16:30:48 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:39.601 16:30:48 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:39.601 16:30:48 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:39.601 16:30:48 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:19:39.601 192.168.100.9' 00:19:39.601 16:30:48 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:39.601 192.168.100.9' 00:19:39.601 16:30:48 -- nvmf/common.sh@446 -- # head -n 1 00:19:39.601 16:30:48 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:39.601 16:30:48 -- nvmf/common.sh@447 -- # head -n 1 00:19:39.601 16:30:48 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:19:39.601 192.168.100.9' 00:19:39.601 16:30:48 -- nvmf/common.sh@447 -- # tail -n +2 00:19:39.601 16:30:48 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:39.601 16:30:48 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:19:39.601 16:30:48 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:39.601 16:30:48 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:19:39.601 16:30:48 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:19:39.601 16:30:48 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:19:39.601 16:30:48 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:39.601 16:30:48 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:39.601 16:30:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:39.601 16:30:48 -- common/autotest_common.sh@10 -- # set +x 00:19:39.601 16:30:48 -- nvmf/common.sh@470 -- # nvmfpid=510507 00:19:39.601 16:30:48 -- nvmf/common.sh@471 -- # waitforlisten 510507 00:19:39.601 16:30:48 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:39.601 16:30:48 -- common/autotest_common.sh@817 -- # '[' -z 510507 ']' 00:19:39.601 16:30:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:39.601 16:30:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:39.601 16:30:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
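The address discovery traced above reduces to one pipeline per RDMA interface: take the "ip -o -4 addr show" output, pull field 4 (address/prefix) with awk, and strip the prefix length with cut; the first hit becomes NVMF_FIRST_TARGET_IP and the second NVMF_SECOND_TARGET_IP. A minimal sketch of that pattern, with the mlx_0_0/mlx_0_1 interface names assumed here rather than discovered from the Mellanox PCI scan the way nvmf/common.sh does it:

    # Sketch of the get_ip_address pattern seen in the trace above.
    # Interface names are assumed; the real script derives them from the
    # "Found net devices under 0000:18:00.x" PCI discovery step.
    get_ip_address() {
        local interface=$1
        # "ip -o -4" prints one line per address; field 4 is ADDR/PREFIX
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this run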
00:19:39.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:39.601 16:30:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:39.601 16:30:48 -- common/autotest_common.sh@10 -- # set +x 00:19:39.601 [2024-04-26 16:30:48.234815] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:19:39.601 [2024-04-26 16:30:48.234870] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:39.601 EAL: No free 2048 kB hugepages reported on node 1 00:19:39.601 [2024-04-26 16:30:48.308181] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:39.601 [2024-04-26 16:30:48.388340] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:39.601 [2024-04-26 16:30:48.388391] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:39.601 [2024-04-26 16:30:48.388400] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:39.601 [2024-04-26 16:30:48.388424] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:39.601 [2024-04-26 16:30:48.388432] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:39.601 [2024-04-26 16:30:48.388536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:39.601 [2024-04-26 16:30:48.388625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:39.601 [2024-04-26 16:30:48.388725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:39.601 [2024-04-26 16:30:48.388726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:40.169 16:30:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:40.169 16:30:49 -- common/autotest_common.sh@850 -- # return 0 00:19:40.169 16:30:49 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:40.169 16:30:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:40.169 16:30:49 -- common/autotest_common.sh@10 -- # set +x 00:19:40.169 16:30:49 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:40.169 16:30:49 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:40.169 16:30:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.169 16:30:49 -- common/autotest_common.sh@10 -- # set +x 00:19:40.169 [2024-04-26 16:30:49.126718] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xfcb600/0xfcfaf0) succeed. 00:19:40.169 [2024-04-26 16:30:49.137176] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xfccc40/0x1011180) succeed. 
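With the RDMA transport created by the rpc_cmd traced just above and both mlx5 IB devices registered, shutdown_tc1 appears to batch one malloc bdev, subsystem, namespace and listener per cnode into rpcs.txt and replay that file through rpc_cmd, which is what produces the Malloc1..Malloc10 bdevs and the 192.168.100.8 port 4420 listener notice that follow. A hedged sketch of the equivalent rpc.py calls for a single subsystem; the rpc.py path and the serial number are illustrative assumptions, while the NQN pattern, bdev geometry (64 MiB of 512 B blocks) and listener address come from this log:

    # Sketch only: one subsystem's worth of the setup batched into rpcs.txt.
    # The transport itself was already created above (shutdown.sh@20).
    rpc=./spdk/scripts/rpc.py                                  # path assumed
    $rpc bdev_malloc_create 64 512 -b Malloc1                  # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # serial assumed
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420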
00:19:40.428 16:30:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.428 16:30:49 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:40.428 16:30:49 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:40.428 16:30:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:40.428 16:30:49 -- common/autotest_common.sh@10 -- # set +x 00:19:40.428 16:30:49 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:40.428 16:30:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:40.428 16:30:49 -- target/shutdown.sh@28 -- # cat 00:19:40.428 16:30:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:40.428 16:30:49 -- target/shutdown.sh@28 -- # cat 00:19:40.428 16:30:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:40.428 16:30:49 -- target/shutdown.sh@28 -- # cat 00:19:40.428 16:30:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:40.428 16:30:49 -- target/shutdown.sh@28 -- # cat 00:19:40.428 16:30:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:40.428 16:30:49 -- target/shutdown.sh@28 -- # cat 00:19:40.428 16:30:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:40.428 16:30:49 -- target/shutdown.sh@28 -- # cat 00:19:40.428 16:30:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:40.428 16:30:49 -- target/shutdown.sh@28 -- # cat 00:19:40.428 16:30:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:40.428 16:30:49 -- target/shutdown.sh@28 -- # cat 00:19:40.428 16:30:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:40.428 16:30:49 -- target/shutdown.sh@28 -- # cat 00:19:40.428 16:30:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:40.428 16:30:49 -- target/shutdown.sh@28 -- # cat 00:19:40.428 16:30:49 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:40.428 16:30:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:40.428 16:30:49 -- common/autotest_common.sh@10 -- # set +x 00:19:40.428 Malloc1 00:19:40.428 [2024-04-26 16:30:49.376122] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:40.428 Malloc2 00:19:40.428 Malloc3 00:19:40.686 Malloc4 00:19:40.686 Malloc5 00:19:40.686 Malloc6 00:19:40.686 Malloc7 00:19:40.686 Malloc8 00:19:40.945 Malloc9 00:19:40.945 Malloc10 00:19:40.945 16:30:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:40.945 16:30:49 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:40.945 16:30:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:40.945 16:30:49 -- common/autotest_common.sh@10 -- # set +x 00:19:40.945 16:30:49 -- target/shutdown.sh@78 -- # perfpid=510832 00:19:40.945 16:30:49 -- target/shutdown.sh@79 -- # waitforlisten 510832 /var/tmp/bdevperf.sock 00:19:40.945 16:30:49 -- common/autotest_common.sh@817 -- # '[' -z 510832 ']' 00:19:40.945 16:30:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:40.945 16:30:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:40.945 16:30:49 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:19:40.945 16:30:49 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:40.945 16:30:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:40.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:40.945 16:30:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:40.945 16:30:49 -- nvmf/common.sh@521 -- # config=() 00:19:40.945 16:30:49 -- common/autotest_common.sh@10 -- # set +x 00:19:40.945 16:30:49 -- nvmf/common.sh@521 -- # local subsystem config 00:19:40.945 16:30:49 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:40.945 16:30:49 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:40.945 { 00:19:40.945 "params": { 00:19:40.945 "name": "Nvme$subsystem", 00:19:40.945 "trtype": "$TEST_TRANSPORT", 00:19:40.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.945 "adrfam": "ipv4", 00:19:40.945 "trsvcid": "$NVMF_PORT", 00:19:40.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.945 "hdgst": ${hdgst:-false}, 00:19:40.945 "ddgst": ${ddgst:-false} 00:19:40.945 }, 00:19:40.946 "method": "bdev_nvme_attach_controller" 00:19:40.946 } 00:19:40.946 EOF 00:19:40.946 )") 00:19:40.946 16:30:49 -- nvmf/common.sh@543 -- # cat 00:19:40.946 16:30:49 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:40.946 16:30:49 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:40.946 { 00:19:40.946 "params": { 00:19:40.946 "name": "Nvme$subsystem", 00:19:40.946 "trtype": "$TEST_TRANSPORT", 00:19:40.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.946 "adrfam": "ipv4", 00:19:40.946 "trsvcid": "$NVMF_PORT", 00:19:40.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.946 "hdgst": ${hdgst:-false}, 00:19:40.946 "ddgst": ${ddgst:-false} 00:19:40.946 }, 00:19:40.946 "method": "bdev_nvme_attach_controller" 00:19:40.946 } 00:19:40.946 EOF 00:19:40.946 )") 00:19:40.946 16:30:49 -- nvmf/common.sh@543 -- # cat 00:19:40.946 16:30:49 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:40.946 16:30:49 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:40.946 { 00:19:40.946 "params": { 00:19:40.946 "name": "Nvme$subsystem", 00:19:40.946 "trtype": "$TEST_TRANSPORT", 00:19:40.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.946 "adrfam": "ipv4", 00:19:40.946 "trsvcid": "$NVMF_PORT", 00:19:40.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.946 "hdgst": ${hdgst:-false}, 00:19:40.946 "ddgst": ${ddgst:-false} 00:19:40.946 }, 00:19:40.946 "method": "bdev_nvme_attach_controller" 00:19:40.946 } 00:19:40.946 EOF 00:19:40.946 )") 00:19:40.946 16:30:49 -- nvmf/common.sh@543 -- # cat 00:19:40.946 16:30:49 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:40.946 16:30:49 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:40.946 { 00:19:40.946 "params": { 00:19:40.946 "name": "Nvme$subsystem", 00:19:40.946 "trtype": "$TEST_TRANSPORT", 00:19:40.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.946 "adrfam": "ipv4", 00:19:40.946 "trsvcid": "$NVMF_PORT", 00:19:40.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.946 "hdgst": ${hdgst:-false}, 00:19:40.946 "ddgst": ${ddgst:-false} 00:19:40.946 }, 00:19:40.946 "method": "bdev_nvme_attach_controller" 00:19:40.946 } 00:19:40.946 EOF 00:19:40.946 )") 00:19:40.946 16:30:49 -- nvmf/common.sh@543 -- # cat 00:19:40.946 16:30:49 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:40.946 
16:30:49 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:40.946 { 00:19:40.946 "params": { 00:19:40.946 "name": "Nvme$subsystem", 00:19:40.946 "trtype": "$TEST_TRANSPORT", 00:19:40.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.946 "adrfam": "ipv4", 00:19:40.946 "trsvcid": "$NVMF_PORT", 00:19:40.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.946 "hdgst": ${hdgst:-false}, 00:19:40.946 "ddgst": ${ddgst:-false} 00:19:40.946 }, 00:19:40.946 "method": "bdev_nvme_attach_controller" 00:19:40.946 } 00:19:40.946 EOF 00:19:40.946 )") 00:19:40.946 16:30:49 -- nvmf/common.sh@543 -- # cat 00:19:40.946 [2024-04-26 16:30:49.886516] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:19:40.946 [2024-04-26 16:30:49.886573] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:40.946 16:30:49 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:40.946 16:30:49 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:40.946 { 00:19:40.946 "params": { 00:19:40.946 "name": "Nvme$subsystem", 00:19:40.946 "trtype": "$TEST_TRANSPORT", 00:19:40.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.946 "adrfam": "ipv4", 00:19:40.946 "trsvcid": "$NVMF_PORT", 00:19:40.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.946 "hdgst": ${hdgst:-false}, 00:19:40.946 "ddgst": ${ddgst:-false} 00:19:40.946 }, 00:19:40.946 "method": "bdev_nvme_attach_controller" 00:19:40.946 } 00:19:40.946 EOF 00:19:40.946 )") 00:19:40.946 16:30:49 -- nvmf/common.sh@543 -- # cat 00:19:40.946 16:30:49 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:40.946 16:30:49 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:40.946 { 00:19:40.946 "params": { 00:19:40.946 "name": "Nvme$subsystem", 00:19:40.946 "trtype": "$TEST_TRANSPORT", 00:19:40.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.946 "adrfam": "ipv4", 00:19:40.946 "trsvcid": "$NVMF_PORT", 00:19:40.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.946 "hdgst": ${hdgst:-false}, 00:19:40.946 "ddgst": ${ddgst:-false} 00:19:40.946 }, 00:19:40.946 "method": "bdev_nvme_attach_controller" 00:19:40.946 } 00:19:40.946 EOF 00:19:40.946 )") 00:19:40.946 16:30:49 -- nvmf/common.sh@543 -- # cat 00:19:40.946 16:30:49 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:40.946 16:30:49 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:40.946 { 00:19:40.946 "params": { 00:19:40.946 "name": "Nvme$subsystem", 00:19:40.946 "trtype": "$TEST_TRANSPORT", 00:19:40.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.946 "adrfam": "ipv4", 00:19:40.946 "trsvcid": "$NVMF_PORT", 00:19:40.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.946 "hdgst": ${hdgst:-false}, 00:19:40.946 "ddgst": ${ddgst:-false} 00:19:40.946 }, 00:19:40.946 "method": "bdev_nvme_attach_controller" 00:19:40.946 } 00:19:40.946 EOF 00:19:40.946 )") 00:19:40.946 16:30:49 -- nvmf/common.sh@543 -- # cat 00:19:40.946 16:30:49 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:40.946 16:30:49 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:40.946 { 00:19:40.946 "params": { 00:19:40.946 "name": "Nvme$subsystem", 
00:19:40.946 "trtype": "$TEST_TRANSPORT", 00:19:40.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.946 "adrfam": "ipv4", 00:19:40.946 "trsvcid": "$NVMF_PORT", 00:19:40.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.946 "hdgst": ${hdgst:-false}, 00:19:40.946 "ddgst": ${ddgst:-false} 00:19:40.946 }, 00:19:40.946 "method": "bdev_nvme_attach_controller" 00:19:40.946 } 00:19:40.946 EOF 00:19:40.946 )") 00:19:40.946 16:30:49 -- nvmf/common.sh@543 -- # cat 00:19:40.946 16:30:49 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:40.946 16:30:49 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:40.946 { 00:19:40.946 "params": { 00:19:40.946 "name": "Nvme$subsystem", 00:19:40.946 "trtype": "$TEST_TRANSPORT", 00:19:40.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.946 "adrfam": "ipv4", 00:19:40.946 "trsvcid": "$NVMF_PORT", 00:19:40.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.946 "hdgst": ${hdgst:-false}, 00:19:40.946 "ddgst": ${ddgst:-false} 00:19:40.946 }, 00:19:40.946 "method": "bdev_nvme_attach_controller" 00:19:40.946 } 00:19:40.946 EOF 00:19:40.946 )") 00:19:40.946 16:30:49 -- nvmf/common.sh@543 -- # cat 00:19:40.946 16:30:49 -- nvmf/common.sh@545 -- # jq . 00:19:40.946 16:30:49 -- nvmf/common.sh@546 -- # IFS=, 00:19:40.946 16:30:49 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:19:40.946 "params": { 00:19:40.946 "name": "Nvme1", 00:19:40.946 "trtype": "rdma", 00:19:40.946 "traddr": "192.168.100.8", 00:19:40.946 "adrfam": "ipv4", 00:19:40.946 "trsvcid": "4420", 00:19:40.946 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:40.946 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:40.946 "hdgst": false, 00:19:40.946 "ddgst": false 00:19:40.946 }, 00:19:40.946 "method": "bdev_nvme_attach_controller" 00:19:40.946 },{ 00:19:40.946 "params": { 00:19:40.946 "name": "Nvme2", 00:19:40.946 "trtype": "rdma", 00:19:40.946 "traddr": "192.168.100.8", 00:19:40.946 "adrfam": "ipv4", 00:19:40.946 "trsvcid": "4420", 00:19:40.946 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:40.946 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:40.946 "hdgst": false, 00:19:40.946 "ddgst": false 00:19:40.946 }, 00:19:40.946 "method": "bdev_nvme_attach_controller" 00:19:40.946 },{ 00:19:40.946 "params": { 00:19:40.946 "name": "Nvme3", 00:19:40.946 "trtype": "rdma", 00:19:40.946 "traddr": "192.168.100.8", 00:19:40.946 "adrfam": "ipv4", 00:19:40.946 "trsvcid": "4420", 00:19:40.946 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:40.946 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:40.946 "hdgst": false, 00:19:40.946 "ddgst": false 00:19:40.946 }, 00:19:40.946 "method": "bdev_nvme_attach_controller" 00:19:40.946 },{ 00:19:40.946 "params": { 00:19:40.946 "name": "Nvme4", 00:19:40.946 "trtype": "rdma", 00:19:40.946 "traddr": "192.168.100.8", 00:19:40.946 "adrfam": "ipv4", 00:19:40.946 "trsvcid": "4420", 00:19:40.946 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:40.946 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:40.946 "hdgst": false, 00:19:40.946 "ddgst": false 00:19:40.946 }, 00:19:40.946 "method": "bdev_nvme_attach_controller" 00:19:40.946 },{ 00:19:40.946 "params": { 00:19:40.946 "name": "Nvme5", 00:19:40.946 "trtype": "rdma", 00:19:40.946 "traddr": "192.168.100.8", 00:19:40.946 "adrfam": "ipv4", 00:19:40.946 "trsvcid": "4420", 00:19:40.947 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:40.947 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:40.947 "hdgst": false, 00:19:40.947 
"ddgst": false 00:19:40.947 }, 00:19:40.947 "method": "bdev_nvme_attach_controller" 00:19:40.947 },{ 00:19:40.947 "params": { 00:19:40.947 "name": "Nvme6", 00:19:40.947 "trtype": "rdma", 00:19:40.947 "traddr": "192.168.100.8", 00:19:40.947 "adrfam": "ipv4", 00:19:40.947 "trsvcid": "4420", 00:19:40.947 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:40.947 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:40.947 "hdgst": false, 00:19:40.947 "ddgst": false 00:19:40.947 }, 00:19:40.947 "method": "bdev_nvme_attach_controller" 00:19:40.947 },{ 00:19:40.947 "params": { 00:19:40.947 "name": "Nvme7", 00:19:40.947 "trtype": "rdma", 00:19:40.947 "traddr": "192.168.100.8", 00:19:40.947 "adrfam": "ipv4", 00:19:40.947 "trsvcid": "4420", 00:19:40.947 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:40.947 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:40.947 "hdgst": false, 00:19:40.947 "ddgst": false 00:19:40.947 }, 00:19:40.947 "method": "bdev_nvme_attach_controller" 00:19:40.947 },{ 00:19:40.947 "params": { 00:19:40.947 "name": "Nvme8", 00:19:40.947 "trtype": "rdma", 00:19:40.947 "traddr": "192.168.100.8", 00:19:40.947 "adrfam": "ipv4", 00:19:40.947 "trsvcid": "4420", 00:19:40.947 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:40.947 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:40.947 "hdgst": false, 00:19:40.947 "ddgst": false 00:19:40.947 }, 00:19:40.947 "method": "bdev_nvme_attach_controller" 00:19:40.947 },{ 00:19:40.947 "params": { 00:19:40.947 "name": "Nvme9", 00:19:40.947 "trtype": "rdma", 00:19:40.947 "traddr": "192.168.100.8", 00:19:40.947 "adrfam": "ipv4", 00:19:40.947 "trsvcid": "4420", 00:19:40.947 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:40.947 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:40.947 "hdgst": false, 00:19:40.947 "ddgst": false 00:19:40.947 }, 00:19:40.947 "method": "bdev_nvme_attach_controller" 00:19:40.947 },{ 00:19:40.947 "params": { 00:19:40.947 "name": "Nvme10", 00:19:40.947 "trtype": "rdma", 00:19:40.947 "traddr": "192.168.100.8", 00:19:40.947 "adrfam": "ipv4", 00:19:40.947 "trsvcid": "4420", 00:19:40.947 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:40.947 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:40.947 "hdgst": false, 00:19:40.947 "ddgst": false 00:19:40.947 }, 00:19:40.947 "method": "bdev_nvme_attach_controller" 00:19:40.947 }' 00:19:40.947 EAL: No free 2048 kB hugepages reported on node 1 00:19:41.206 [2024-04-26 16:30:49.977508] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.206 [2024-04-26 16:30:50.063779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.140 16:30:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:42.140 16:30:50 -- common/autotest_common.sh@850 -- # return 0 00:19:42.140 16:30:50 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:42.140 16:30:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.140 16:30:50 -- common/autotest_common.sh@10 -- # set +x 00:19:42.140 16:30:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.140 16:30:50 -- target/shutdown.sh@83 -- # kill -9 510832 00:19:42.140 16:30:50 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:19:42.140 16:30:50 -- target/shutdown.sh@87 -- # sleep 1 00:19:43.076 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 510832 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:19:43.076 16:30:51 -- target/shutdown.sh@88 -- # kill -0 510507 00:19:43.076 
16:30:51 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:19:43.076 16:30:51 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:43.076 16:30:51 -- nvmf/common.sh@521 -- # config=() 00:19:43.076 16:30:51 -- nvmf/common.sh@521 -- # local subsystem config 00:19:43.076 16:30:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:43.076 16:30:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:43.076 { 00:19:43.076 "params": { 00:19:43.076 "name": "Nvme$subsystem", 00:19:43.076 "trtype": "$TEST_TRANSPORT", 00:19:43.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:43.076 "adrfam": "ipv4", 00:19:43.076 "trsvcid": "$NVMF_PORT", 00:19:43.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:43.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:43.076 "hdgst": ${hdgst:-false}, 00:19:43.076 "ddgst": ${ddgst:-false} 00:19:43.076 }, 00:19:43.076 "method": "bdev_nvme_attach_controller" 00:19:43.076 } 00:19:43.076 EOF 00:19:43.076 )") 00:19:43.076 16:30:51 -- nvmf/common.sh@543 -- # cat 00:19:43.076 16:30:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:43.076 16:30:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:43.076 { 00:19:43.076 "params": { 00:19:43.076 "name": "Nvme$subsystem", 00:19:43.076 "trtype": "$TEST_TRANSPORT", 00:19:43.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:43.076 "adrfam": "ipv4", 00:19:43.076 "trsvcid": "$NVMF_PORT", 00:19:43.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:43.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:43.076 "hdgst": ${hdgst:-false}, 00:19:43.076 "ddgst": ${ddgst:-false} 00:19:43.076 }, 00:19:43.076 "method": "bdev_nvme_attach_controller" 00:19:43.076 } 00:19:43.076 EOF 00:19:43.076 )") 00:19:43.076 16:30:51 -- nvmf/common.sh@543 -- # cat 00:19:43.076 16:30:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:43.076 16:30:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:43.076 { 00:19:43.076 "params": { 00:19:43.076 "name": "Nvme$subsystem", 00:19:43.076 "trtype": "$TEST_TRANSPORT", 00:19:43.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:43.076 "adrfam": "ipv4", 00:19:43.076 "trsvcid": "$NVMF_PORT", 00:19:43.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:43.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:43.076 "hdgst": ${hdgst:-false}, 00:19:43.076 "ddgst": ${ddgst:-false} 00:19:43.076 }, 00:19:43.076 "method": "bdev_nvme_attach_controller" 00:19:43.076 } 00:19:43.076 EOF 00:19:43.076 )") 00:19:43.076 16:30:51 -- nvmf/common.sh@543 -- # cat 00:19:43.076 16:30:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:43.076 16:30:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:43.076 { 00:19:43.076 "params": { 00:19:43.076 "name": "Nvme$subsystem", 00:19:43.076 "trtype": "$TEST_TRANSPORT", 00:19:43.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:43.076 "adrfam": "ipv4", 00:19:43.076 "trsvcid": "$NVMF_PORT", 00:19:43.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:43.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:43.076 "hdgst": ${hdgst:-false}, 00:19:43.076 "ddgst": ${ddgst:-false} 00:19:43.076 }, 00:19:43.076 "method": "bdev_nvme_attach_controller" 00:19:43.076 } 00:19:43.076 EOF 00:19:43.076 )") 00:19:43.076 16:30:51 -- nvmf/common.sh@543 -- # cat 00:19:43.076 16:30:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:43.076 16:30:51 -- nvmf/common.sh@543 -- # config+=("$(cat 
<<-EOF 00:19:43.076 { 00:19:43.076 "params": { 00:19:43.076 "name": "Nvme$subsystem", 00:19:43.076 "trtype": "$TEST_TRANSPORT", 00:19:43.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:43.076 "adrfam": "ipv4", 00:19:43.076 "trsvcid": "$NVMF_PORT", 00:19:43.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:43.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:43.076 "hdgst": ${hdgst:-false}, 00:19:43.076 "ddgst": ${ddgst:-false} 00:19:43.076 }, 00:19:43.076 "method": "bdev_nvme_attach_controller" 00:19:43.076 } 00:19:43.076 EOF 00:19:43.076 )") 00:19:43.076 16:30:51 -- nvmf/common.sh@543 -- # cat 00:19:43.076 16:30:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:43.076 16:30:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:43.076 { 00:19:43.076 "params": { 00:19:43.076 "name": "Nvme$subsystem", 00:19:43.076 "trtype": "$TEST_TRANSPORT", 00:19:43.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:43.076 "adrfam": "ipv4", 00:19:43.076 "trsvcid": "$NVMF_PORT", 00:19:43.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:43.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:43.076 "hdgst": ${hdgst:-false}, 00:19:43.076 "ddgst": ${ddgst:-false} 00:19:43.076 }, 00:19:43.076 "method": "bdev_nvme_attach_controller" 00:19:43.076 } 00:19:43.076 EOF 00:19:43.076 )") 00:19:43.076 16:30:51 -- nvmf/common.sh@543 -- # cat 00:19:43.076 [2024-04-26 16:30:51.973632] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:19:43.076 [2024-04-26 16:30:51.973686] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid511052 ] 00:19:43.076 16:30:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:43.076 16:30:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:43.076 { 00:19:43.076 "params": { 00:19:43.076 "name": "Nvme$subsystem", 00:19:43.076 "trtype": "$TEST_TRANSPORT", 00:19:43.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:43.076 "adrfam": "ipv4", 00:19:43.076 "trsvcid": "$NVMF_PORT", 00:19:43.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:43.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:43.076 "hdgst": ${hdgst:-false}, 00:19:43.076 "ddgst": ${ddgst:-false} 00:19:43.076 }, 00:19:43.076 "method": "bdev_nvme_attach_controller" 00:19:43.076 } 00:19:43.076 EOF 00:19:43.076 )") 00:19:43.076 16:30:51 -- nvmf/common.sh@543 -- # cat 00:19:43.076 16:30:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:43.076 16:30:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:43.076 { 00:19:43.076 "params": { 00:19:43.076 "name": "Nvme$subsystem", 00:19:43.076 "trtype": "$TEST_TRANSPORT", 00:19:43.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:43.076 "adrfam": "ipv4", 00:19:43.076 "trsvcid": "$NVMF_PORT", 00:19:43.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:43.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:43.076 "hdgst": ${hdgst:-false}, 00:19:43.076 "ddgst": ${ddgst:-false} 00:19:43.076 }, 00:19:43.076 "method": "bdev_nvme_attach_controller" 00:19:43.076 } 00:19:43.076 EOF 00:19:43.076 )") 00:19:43.076 16:30:51 -- nvmf/common.sh@543 -- # cat 00:19:43.076 16:30:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:43.076 16:30:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:43.076 { 00:19:43.076 "params": { 00:19:43.076 "name": "Nvme$subsystem", 00:19:43.076 "trtype": "$TEST_TRANSPORT", 
00:19:43.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:43.076 "adrfam": "ipv4", 00:19:43.076 "trsvcid": "$NVMF_PORT", 00:19:43.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:43.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:43.076 "hdgst": ${hdgst:-false}, 00:19:43.076 "ddgst": ${ddgst:-false} 00:19:43.076 }, 00:19:43.076 "method": "bdev_nvme_attach_controller" 00:19:43.076 } 00:19:43.076 EOF 00:19:43.076 )") 00:19:43.076 16:30:51 -- nvmf/common.sh@543 -- # cat 00:19:43.076 16:30:52 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:43.076 16:30:52 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:43.076 { 00:19:43.076 "params": { 00:19:43.076 "name": "Nvme$subsystem", 00:19:43.076 "trtype": "$TEST_TRANSPORT", 00:19:43.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:43.076 "adrfam": "ipv4", 00:19:43.076 "trsvcid": "$NVMF_PORT", 00:19:43.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:43.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:43.076 "hdgst": ${hdgst:-false}, 00:19:43.076 "ddgst": ${ddgst:-false} 00:19:43.076 }, 00:19:43.076 "method": "bdev_nvme_attach_controller" 00:19:43.076 } 00:19:43.076 EOF 00:19:43.076 )") 00:19:43.076 16:30:52 -- nvmf/common.sh@543 -- # cat 00:19:43.076 EAL: No free 2048 kB hugepages reported on node 1 00:19:43.076 16:30:52 -- nvmf/common.sh@545 -- # jq . 00:19:43.076 16:30:52 -- nvmf/common.sh@546 -- # IFS=, 00:19:43.076 16:30:52 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:19:43.076 "params": { 00:19:43.076 "name": "Nvme1", 00:19:43.076 "trtype": "rdma", 00:19:43.076 "traddr": "192.168.100.8", 00:19:43.076 "adrfam": "ipv4", 00:19:43.076 "trsvcid": "4420", 00:19:43.076 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.076 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:43.076 "hdgst": false, 00:19:43.076 "ddgst": false 00:19:43.076 }, 00:19:43.076 "method": "bdev_nvme_attach_controller" 00:19:43.076 },{ 00:19:43.076 "params": { 00:19:43.076 "name": "Nvme2", 00:19:43.076 "trtype": "rdma", 00:19:43.076 "traddr": "192.168.100.8", 00:19:43.076 "adrfam": "ipv4", 00:19:43.076 "trsvcid": "4420", 00:19:43.076 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:43.076 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:43.076 "hdgst": false, 00:19:43.076 "ddgst": false 00:19:43.076 }, 00:19:43.076 "method": "bdev_nvme_attach_controller" 00:19:43.076 },{ 00:19:43.076 "params": { 00:19:43.076 "name": "Nvme3", 00:19:43.076 "trtype": "rdma", 00:19:43.076 "traddr": "192.168.100.8", 00:19:43.076 "adrfam": "ipv4", 00:19:43.076 "trsvcid": "4420", 00:19:43.076 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:43.076 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:43.076 "hdgst": false, 00:19:43.076 "ddgst": false 00:19:43.076 }, 00:19:43.076 "method": "bdev_nvme_attach_controller" 00:19:43.076 },{ 00:19:43.076 "params": { 00:19:43.076 "name": "Nvme4", 00:19:43.076 "trtype": "rdma", 00:19:43.076 "traddr": "192.168.100.8", 00:19:43.076 "adrfam": "ipv4", 00:19:43.076 "trsvcid": "4420", 00:19:43.076 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:43.076 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:43.076 "hdgst": false, 00:19:43.076 "ddgst": false 00:19:43.076 }, 00:19:43.076 "method": "bdev_nvme_attach_controller" 00:19:43.076 },{ 00:19:43.076 "params": { 00:19:43.076 "name": "Nvme5", 00:19:43.076 "trtype": "rdma", 00:19:43.076 "traddr": "192.168.100.8", 00:19:43.076 "adrfam": "ipv4", 00:19:43.076 "trsvcid": "4420", 00:19:43.076 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:43.076 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:43.076 "hdgst": 
false, 00:19:43.076 "ddgst": false 00:19:43.076 }, 00:19:43.076 "method": "bdev_nvme_attach_controller" 00:19:43.076 },{ 00:19:43.076 "params": { 00:19:43.076 "name": "Nvme6", 00:19:43.076 "trtype": "rdma", 00:19:43.076 "traddr": "192.168.100.8", 00:19:43.076 "adrfam": "ipv4", 00:19:43.076 "trsvcid": "4420", 00:19:43.076 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:43.076 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:43.076 "hdgst": false, 00:19:43.076 "ddgst": false 00:19:43.076 }, 00:19:43.076 "method": "bdev_nvme_attach_controller" 00:19:43.076 },{ 00:19:43.076 "params": { 00:19:43.076 "name": "Nvme7", 00:19:43.076 "trtype": "rdma", 00:19:43.076 "traddr": "192.168.100.8", 00:19:43.076 "adrfam": "ipv4", 00:19:43.076 "trsvcid": "4420", 00:19:43.076 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:43.076 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:43.076 "hdgst": false, 00:19:43.076 "ddgst": false 00:19:43.076 }, 00:19:43.076 "method": "bdev_nvme_attach_controller" 00:19:43.076 },{ 00:19:43.076 "params": { 00:19:43.076 "name": "Nvme8", 00:19:43.076 "trtype": "rdma", 00:19:43.076 "traddr": "192.168.100.8", 00:19:43.076 "adrfam": "ipv4", 00:19:43.076 "trsvcid": "4420", 00:19:43.076 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:43.076 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:43.076 "hdgst": false, 00:19:43.076 "ddgst": false 00:19:43.076 }, 00:19:43.076 "method": "bdev_nvme_attach_controller" 00:19:43.077 },{ 00:19:43.077 "params": { 00:19:43.077 "name": "Nvme9", 00:19:43.077 "trtype": "rdma", 00:19:43.077 "traddr": "192.168.100.8", 00:19:43.077 "adrfam": "ipv4", 00:19:43.077 "trsvcid": "4420", 00:19:43.077 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:43.077 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:43.077 "hdgst": false, 00:19:43.077 "ddgst": false 00:19:43.077 }, 00:19:43.077 "method": "bdev_nvme_attach_controller" 00:19:43.077 },{ 00:19:43.077 "params": { 00:19:43.077 "name": "Nvme10", 00:19:43.077 "trtype": "rdma", 00:19:43.077 "traddr": "192.168.100.8", 00:19:43.077 "adrfam": "ipv4", 00:19:43.077 "trsvcid": "4420", 00:19:43.077 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:43.077 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:43.077 "hdgst": false, 00:19:43.077 "ddgst": false 00:19:43.077 }, 00:19:43.077 "method": "bdev_nvme_attach_controller" 00:19:43.077 }' 00:19:43.077 [2024-04-26 16:30:52.048232] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.336 [2024-04-26 16:30:52.126942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.272 Running I/O for 1 seconds... 
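The bdevperf job above is wired together entirely in the shell: gen_nvmf_target_json renders one bdev_nvme_attach_controller stanza per subsystem from a heredoc template, comma-joins the stanzas and pretty-prints the result with jq, and bdevperf reads it over process substitution (the --json /dev/fd/62 argument in the trace). The xtrace only shows that machinery in fragments, so the block below is a condensed sketch of the pattern rather than the exact helper from nvmf/common.sh; in particular the top-level "subsystems"/"bdev" wrapper never appears in the trace and is an assumption here.

# Condensed sketch of the config-generation pattern seen in the xtrace above.
gen_nvmf_target_json() {
    local subsystem config=()

    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done

    # Comma-join the stanzas and wrap them in a bdev subsystem block; the
    # wrapper is not visible in the trace, so treat its exact shape as an
    # assumption. jq validates and pretty-prints the final config.
    local IFS=,
    jq . <<EOF
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [ ${config[*]} ]
    }
  ]
}
EOF
}

# Values mirror this run: RDMA transport, target 192.168.100.8:4420, ten
# subsystems, 64 KiB verify I/O at queue depth 64 for 1 second.
export TEST_TRANSPORT=rdma NVMF_FIRST_TARGET_IP=192.168.100.8 NVMF_PORT=4420
build/examples/bdevperf --json <(gen_nvmf_target_json {1..10}) \
    -q 64 -o 65536 -w verify -t 1

Run from the SPDK repository root against a target that already exposes cnode1 through cnode10, this reproduces the 1-second verify pass whose per-controller results follow.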
00:19:45.206
00:19:45.206 Latency(us)
00:19:45.206 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:45.206 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:45.206 Verification LBA range: start 0x0 length 0x400
00:19:45.206 Nvme1n1 : 1.16 350.97 21.94 0.00 0.00 177193.59 31457.28 198773.54
00:19:45.206 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:45.206 Verification LBA range: start 0x0 length 0x400
00:19:45.206 Nvme2n1 : 1.16 343.82 21.49 0.00 0.00 177727.36 33508.84 186920.07
00:19:45.206 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:45.206 Verification LBA range: start 0x0 length 0x400
00:19:45.206 Nvme3n1 : 1.17 390.18 24.39 0.00 0.00 157464.24 5670.29 175978.41
00:19:45.206 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:45.206 Verification LBA range: start 0x0 length 0x400
00:19:45.206 Nvme4n1 : 1.17 408.54 25.53 0.00 0.00 148355.10 7693.36 132211.76
00:19:45.206 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:45.206 Verification LBA range: start 0x0 length 0x400
00:19:45.206 Nvme5n1 : 1.18 387.86 24.24 0.00 0.00 153707.66 8662.15 125829.12
00:19:45.206 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:45.206 Verification LBA range: start 0x0 length 0x400
00:19:45.206 Nvme6n1 : 1.18 393.52 24.60 0.00 0.00 149569.78 8491.19 116255.17
00:19:45.206 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:45.206 Verification LBA range: start 0x0 length 0x400
00:19:45.206 Nvme7n1 : 1.18 399.14 24.95 0.00 0.00 145535.09 8491.19 109872.53
00:19:45.206 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:45.206 Verification LBA range: start 0x0 length 0x400
00:19:45.206 Nvme8n1 : 1.18 382.78 23.92 0.00 0.00 149268.49 8263.23 100298.57
00:19:45.206 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:45.206 Verification LBA range: start 0x0 length 0x400
00:19:45.206 Nvme9n1 : 1.17 382.60 23.91 0.00 0.00 148154.86 9346.00 97107.26
00:19:45.206 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:45.206 Verification LBA range: start 0x0 length 0x400
00:19:45.206 Nvme10n1 : 1.17 327.48 20.47 0.00 0.00 170473.15 10314.80 205156.17
00:19:45.206 ===================================================================================================================
00:19:45.206 Total : 3766.90 235.43 0.00 0.00 156997.09 5670.29 205156.17
00:19:45.465 16:30:54 -- target/shutdown.sh@94 -- # stoptarget 00:19:45.465 16:30:54 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:45.724 16:30:54 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:45.724 16:30:54 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:45.724 16:30:54 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:45.724 16:30:54 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:45.724 16:30:54 -- nvmf/common.sh@117 -- # sync 00:19:45.724 16:30:54 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:19:45.724 16:30:54 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:19:45.724 16:30:54 -- nvmf/common.sh@120 -- # set +e 00:19:45.724 16:30:54 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:45.724 16:30:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:19:45.724 rmmod nvme_rdma
00:19:45.724 rmmod nvme_fabrics 00:19:45.724 16:30:54 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:45.724 16:30:54 -- nvmf/common.sh@124 -- # set -e 00:19:45.724 16:30:54 -- nvmf/common.sh@125 -- # return 0 00:19:45.724 16:30:54 -- nvmf/common.sh@478 -- # '[' -n 510507 ']' 00:19:45.724 16:30:54 -- nvmf/common.sh@479 -- # killprocess 510507 00:19:45.724 16:30:54 -- common/autotest_common.sh@936 -- # '[' -z 510507 ']' 00:19:45.724 16:30:54 -- common/autotest_common.sh@940 -- # kill -0 510507 00:19:45.724 16:30:54 -- common/autotest_common.sh@941 -- # uname 00:19:45.724 16:30:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:45.724 16:30:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 510507 00:19:45.724 16:30:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:45.724 16:30:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:45.724 16:30:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 510507' 00:19:45.724 killing process with pid 510507 00:19:45.724 16:30:54 -- common/autotest_common.sh@955 -- # kill 510507 00:19:45.724 16:30:54 -- common/autotest_common.sh@960 -- # wait 510507 00:19:45.724 [2024-04-26 16:30:54.695906] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:19:46.292 16:30:55 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:46.292 16:30:55 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:19:46.292 00:19:46.292 real 0m13.099s 00:19:46.292 user 0m31.482s 00:19:46.292 sys 0m5.886s 00:19:46.292 16:30:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:46.292 16:30:55 -- common/autotest_common.sh@10 -- # set +x 00:19:46.292 ************************************ 00:19:46.292 END TEST nvmf_shutdown_tc1 00:19:46.292 ************************************ 00:19:46.292 16:30:55 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:19:46.292 16:30:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:46.292 16:30:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:46.292 16:30:55 -- common/autotest_common.sh@10 -- # set +x 00:19:46.292 ************************************ 00:19:46.292 START TEST nvmf_shutdown_tc2 00:19:46.292 ************************************ 00:19:46.292 16:30:55 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc2 00:19:46.292 16:30:55 -- target/shutdown.sh@99 -- # starttarget 00:19:46.292 16:30:55 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:46.292 16:30:55 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:19:46.292 16:30:55 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:46.292 16:30:55 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:46.292 16:30:55 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:46.292 16:30:55 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:46.292 16:30:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:46.292 16:30:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:46.292 16:30:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:46.292 16:30:55 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:46.292 16:30:55 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:46.292 16:30:55 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:46.292 16:30:55 -- common/autotest_common.sh@10 -- # set +x 00:19:46.292 16:30:55 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:46.292 16:30:55 -- nvmf/common.sh@291 -- # pci_devs=() 
00:19:46.292 16:30:55 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:46.292 16:30:55 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:46.292 16:30:55 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:46.292 16:30:55 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:46.292 16:30:55 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:46.292 16:30:55 -- nvmf/common.sh@295 -- # net_devs=() 00:19:46.292 16:30:55 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:46.292 16:30:55 -- nvmf/common.sh@296 -- # e810=() 00:19:46.292 16:30:55 -- nvmf/common.sh@296 -- # local -ga e810 00:19:46.292 16:30:55 -- nvmf/common.sh@297 -- # x722=() 00:19:46.292 16:30:55 -- nvmf/common.sh@297 -- # local -ga x722 00:19:46.292 16:30:55 -- nvmf/common.sh@298 -- # mlx=() 00:19:46.292 16:30:55 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:46.292 16:30:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:46.292 16:30:55 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:46.292 16:30:55 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:46.292 16:30:55 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:46.292 16:30:55 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:46.292 16:30:55 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:46.292 16:30:55 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:46.292 16:30:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:46.292 16:30:55 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:46.292 16:30:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:46.292 16:30:55 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:46.292 16:30:55 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:46.292 16:30:55 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:19:46.292 16:30:55 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:19:46.292 16:30:55 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:19:46.292 16:30:55 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:19:46.292 16:30:55 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:19:46.292 16:30:55 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:46.292 16:30:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:46.292 16:30:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:19:46.292 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:19:46.292 16:30:55 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:46.292 16:30:55 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:46.292 16:30:55 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:19:46.292 16:30:55 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:19:46.292 16:30:55 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:46.292 16:30:55 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:46.292 16:30:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:46.292 16:30:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:19:46.292 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:19:46.292 16:30:55 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:46.292 16:30:55 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:46.292 16:30:55 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:19:46.292 16:30:55 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:19:46.292 
16:30:55 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:46.292 16:30:55 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:46.292 16:30:55 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:46.292 16:30:55 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:19:46.292 16:30:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:46.292 16:30:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:46.292 16:30:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:46.292 16:30:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:46.292 16:30:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:19:46.292 Found net devices under 0000:18:00.0: mlx_0_0 00:19:46.292 16:30:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:46.292 16:30:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:46.292 16:30:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:46.292 16:30:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:46.292 16:30:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:46.293 16:30:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:19:46.293 Found net devices under 0000:18:00.1: mlx_0_1 00:19:46.293 16:30:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:46.293 16:30:55 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:46.293 16:30:55 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:46.293 16:30:55 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:46.293 16:30:55 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:19:46.293 16:30:55 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:19:46.293 16:30:55 -- nvmf/common.sh@409 -- # rdma_device_init 00:19:46.293 16:30:55 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:19:46.553 16:30:55 -- nvmf/common.sh@58 -- # uname 00:19:46.553 16:30:55 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:19:46.553 16:30:55 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:19:46.553 16:30:55 -- nvmf/common.sh@63 -- # modprobe ib_core 00:19:46.553 16:30:55 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:19:46.553 16:30:55 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:19:46.553 16:30:55 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:19:46.553 16:30:55 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:19:46.553 16:30:55 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:19:46.553 16:30:55 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:19:46.553 16:30:55 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:46.553 16:30:55 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:19:46.553 16:30:55 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:46.553 16:30:55 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:46.553 16:30:55 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:46.553 16:30:55 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:46.553 16:30:55 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:46.553 16:30:55 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:46.553 16:30:55 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:46.553 16:30:55 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:46.553 16:30:55 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:46.553 16:30:55 -- nvmf/common.sh@105 -- # continue 2 00:19:46.553 16:30:55 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:46.553 16:30:55 -- 
nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:46.553 16:30:55 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:46.553 16:30:55 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:46.553 16:30:55 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:46.553 16:30:55 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:46.553 16:30:55 -- nvmf/common.sh@105 -- # continue 2 00:19:46.553 16:30:55 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:46.553 16:30:55 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:19:46.553 16:30:55 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:46.553 16:30:55 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:46.553 16:30:55 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:46.553 16:30:55 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:46.553 16:30:55 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:19:46.553 16:30:55 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:19:46.553 16:30:55 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:19:46.553 15: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:46.553 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:19:46.553 altname enp24s0f0np0 00:19:46.553 altname ens785f0np0 00:19:46.553 inet 192.168.100.8/24 scope global mlx_0_0 00:19:46.553 valid_lft forever preferred_lft forever 00:19:46.553 16:30:55 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:46.553 16:30:55 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:19:46.553 16:30:55 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:46.553 16:30:55 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:46.553 16:30:55 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:46.553 16:30:55 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:46.553 16:30:55 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:19:46.553 16:30:55 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:19:46.553 16:30:55 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:19:46.553 16: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:46.553 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:19:46.553 altname enp24s0f1np1 00:19:46.553 altname ens785f1np1 00:19:46.553 inet 192.168.100.9/24 scope global mlx_0_1 00:19:46.553 valid_lft forever preferred_lft forever 00:19:46.553 16:30:55 -- nvmf/common.sh@411 -- # return 0 00:19:46.553 16:30:55 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:46.553 16:30:55 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:46.553 16:30:55 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:19:46.553 16:30:55 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:19:46.553 16:30:55 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:19:46.553 16:30:55 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:46.553 16:30:55 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:46.553 16:30:55 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:46.553 16:30:55 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:46.553 16:30:55 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:46.553 16:30:55 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:46.553 16:30:55 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:46.553 16:30:55 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:46.553 16:30:55 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:46.553 16:30:55 -- nvmf/common.sh@105 -- # continue 2 00:19:46.553 16:30:55 
-- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:46.553 16:30:55 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:46.553 16:30:55 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:46.553 16:30:55 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:46.553 16:30:55 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:46.553 16:30:55 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:46.553 16:30:55 -- nvmf/common.sh@105 -- # continue 2 00:19:46.553 16:30:55 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:46.553 16:30:55 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:19:46.553 16:30:55 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:46.553 16:30:55 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:46.553 16:30:55 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:46.553 16:30:55 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:46.553 16:30:55 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:46.553 16:30:55 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:19:46.553 16:30:55 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:46.553 16:30:55 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:46.553 16:30:55 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:46.553 16:30:55 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:46.553 16:30:55 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:19:46.553 192.168.100.9' 00:19:46.553 16:30:55 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:46.553 192.168.100.9' 00:19:46.553 16:30:55 -- nvmf/common.sh@446 -- # head -n 1 00:19:46.553 16:30:55 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:46.553 16:30:55 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:19:46.553 192.168.100.9' 00:19:46.553 16:30:55 -- nvmf/common.sh@447 -- # tail -n +2 00:19:46.553 16:30:55 -- nvmf/common.sh@447 -- # head -n 1 00:19:46.553 16:30:55 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:46.553 16:30:55 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:19:46.553 16:30:55 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:46.553 16:30:55 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:19:46.553 16:30:55 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:19:46.553 16:30:55 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:19:46.553 16:30:55 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:46.553 16:30:55 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:46.553 16:30:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:46.553 16:30:55 -- common/autotest_common.sh@10 -- # set +x 00:19:46.553 16:30:55 -- nvmf/common.sh@470 -- # nvmfpid=511696 00:19:46.553 16:30:55 -- nvmf/common.sh@471 -- # waitforlisten 511696 00:19:46.553 16:30:55 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:46.553 16:30:55 -- common/autotest_common.sh@817 -- # '[' -z 511696 ']' 00:19:46.553 16:30:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.553 16:30:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:46.553 16:30:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
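The 192.168.100.8/192.168.100.9 addresses used throughout these tests are not hard-coded; the harness walks the mlx5 netdevs, scrapes each RDMA port's IPv4 address with the awk/cut pipeline visible in the trace, and then splits the resulting list into first and second target IPs. A minimal sketch of that step, assuming the same two interface names this rig reported (mlx_0_0 and mlx_0_1):

# Reconstructed from the xtrace above: field 4 of `ip -o -4 addr show <if>`
# is "ADDR/PREFIX", so strip the prefix length to get the bare address.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

# Collect one IPv4 address per RDMA-capable port, then split the list the
# way the harness does.
RDMA_IP_LIST=$(for nic in mlx_0_0 mlx_0_1; do get_ip_address "$nic"; done)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9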
00:19:46.553 16:30:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:46.553 16:30:55 -- common/autotest_common.sh@10 -- # set +x 00:19:46.813 [2024-04-26 16:30:55.603170] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:19:46.813 [2024-04-26 16:30:55.603226] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:46.813 EAL: No free 2048 kB hugepages reported on node 1 00:19:46.813 [2024-04-26 16:30:55.677662] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:46.813 [2024-04-26 16:30:55.759722] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:46.813 [2024-04-26 16:30:55.759768] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:46.813 [2024-04-26 16:30:55.759778] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:46.813 [2024-04-26 16:30:55.759787] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:46.813 [2024-04-26 16:30:55.759799] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:46.813 [2024-04-26 16:30:55.759910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:46.813 [2024-04-26 16:30:55.759998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:46.813 [2024-04-26 16:30:55.760098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:46.813 [2024-04-26 16:30:55.760099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:47.748 16:30:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:47.748 16:30:56 -- common/autotest_common.sh@850 -- # return 0 00:19:47.748 16:30:56 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:47.748 16:30:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:47.748 16:30:56 -- common/autotest_common.sh@10 -- # set +x 00:19:47.748 16:30:56 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:47.748 16:30:56 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:47.748 16:30:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.748 16:30:56 -- common/autotest_common.sh@10 -- # set +x 00:19:47.748 [2024-04-26 16:30:56.494066] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1778600/0x177caf0) succeed. 00:19:47.748 [2024-04-26 16:30:56.504512] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1779c40/0x17be180) succeed. 
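With the transport in place, shutdown.sh's create_subsystems loop that follows (the repeated cat calls at shutdown.sh@28) appends ten RPC stanzas to rpcs.txt and appears to replay them through a single rpc_cmd call; the Malloc1 through Malloc10 lines and the "NVMe/RDMA Target Listening on 192.168.100.8 port 4420" notice below are its visible result. The stanzas themselves are never printed, so the block below is only a plausible reconstruction of one subsystem's setup via SPDK's scripts/rpc.py; the malloc bdev size, block size and serial number are assumptions.

# Shown verbatim in the trace: an RDMA transport with 1024 shared buffers
# and an 8192-byte I/O unit size.
scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

# Hypothetical per-subsystem setup, repeated for i in 1..10 (the real
# rpcs.txt contents are elided from the log).
i=1
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
    -t rdma -a 192.168.100.8 -s 4420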
00:19:47.748 16:30:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.748 16:30:56 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:47.748 16:30:56 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:47.748 16:30:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:47.748 16:30:56 -- common/autotest_common.sh@10 -- # set +x 00:19:47.748 16:30:56 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:47.748 16:30:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:47.748 16:30:56 -- target/shutdown.sh@28 -- # cat 00:19:47.748 16:30:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:47.748 16:30:56 -- target/shutdown.sh@28 -- # cat 00:19:47.748 16:30:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:47.748 16:30:56 -- target/shutdown.sh@28 -- # cat 00:19:47.748 16:30:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:47.748 16:30:56 -- target/shutdown.sh@28 -- # cat 00:19:47.748 16:30:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:47.748 16:30:56 -- target/shutdown.sh@28 -- # cat 00:19:47.748 16:30:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:47.748 16:30:56 -- target/shutdown.sh@28 -- # cat 00:19:47.748 16:30:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:47.748 16:30:56 -- target/shutdown.sh@28 -- # cat 00:19:47.748 16:30:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:47.748 16:30:56 -- target/shutdown.sh@28 -- # cat 00:19:47.749 16:30:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:47.749 16:30:56 -- target/shutdown.sh@28 -- # cat 00:19:47.749 16:30:56 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:47.749 16:30:56 -- target/shutdown.sh@28 -- # cat 00:19:47.749 16:30:56 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:47.749 16:30:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.749 16:30:56 -- common/autotest_common.sh@10 -- # set +x 00:19:47.749 Malloc1 00:19:47.749 [2024-04-26 16:30:56.741857] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:47.749 Malloc2 00:19:48.006 Malloc3 00:19:48.006 Malloc4 00:19:48.006 Malloc5 00:19:48.006 Malloc6 00:19:48.006 Malloc7 00:19:48.266 Malloc8 00:19:48.266 Malloc9 00:19:48.266 Malloc10 00:19:48.266 16:30:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:48.266 16:30:57 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:48.266 16:30:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:48.266 16:30:57 -- common/autotest_common.sh@10 -- # set +x 00:19:48.266 16:30:57 -- target/shutdown.sh@103 -- # perfpid=511934 00:19:48.266 16:30:57 -- target/shutdown.sh@104 -- # waitforlisten 511934 /var/tmp/bdevperf.sock 00:19:48.266 16:30:57 -- common/autotest_common.sh@817 -- # '[' -z 511934 ']' 00:19:48.266 16:30:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:48.266 16:30:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:48.266 16:30:57 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:48.266 16:30:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:48.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:48.266 16:30:57 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:48.266 16:30:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:48.266 16:30:57 -- nvmf/common.sh@521 -- # config=() 00:19:48.266 16:30:57 -- common/autotest_common.sh@10 -- # set +x 00:19:48.266 16:30:57 -- nvmf/common.sh@521 -- # local subsystem config 00:19:48.266 16:30:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:48.266 16:30:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:48.266 { 00:19:48.266 "params": { 00:19:48.266 "name": "Nvme$subsystem", 00:19:48.266 "trtype": "$TEST_TRANSPORT", 00:19:48.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:48.266 "adrfam": "ipv4", 00:19:48.266 "trsvcid": "$NVMF_PORT", 00:19:48.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:48.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:48.266 "hdgst": ${hdgst:-false}, 00:19:48.266 "ddgst": ${ddgst:-false} 00:19:48.266 }, 00:19:48.266 "method": "bdev_nvme_attach_controller" 00:19:48.266 } 00:19:48.266 EOF 00:19:48.266 )") 00:19:48.266 16:30:57 -- nvmf/common.sh@543 -- # cat 00:19:48.266 16:30:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:48.266 16:30:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:48.266 { 00:19:48.266 "params": { 00:19:48.266 "name": "Nvme$subsystem", 00:19:48.266 "trtype": "$TEST_TRANSPORT", 00:19:48.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:48.266 "adrfam": "ipv4", 00:19:48.266 "trsvcid": "$NVMF_PORT", 00:19:48.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:48.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:48.266 "hdgst": ${hdgst:-false}, 00:19:48.266 "ddgst": ${ddgst:-false} 00:19:48.266 }, 00:19:48.266 "method": "bdev_nvme_attach_controller" 00:19:48.266 } 00:19:48.266 EOF 00:19:48.266 )") 00:19:48.266 16:30:57 -- nvmf/common.sh@543 -- # cat 00:19:48.266 16:30:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:48.266 16:30:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:48.266 { 00:19:48.266 "params": { 00:19:48.266 "name": "Nvme$subsystem", 00:19:48.266 "trtype": "$TEST_TRANSPORT", 00:19:48.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:48.266 "adrfam": "ipv4", 00:19:48.266 "trsvcid": "$NVMF_PORT", 00:19:48.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:48.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:48.266 "hdgst": ${hdgst:-false}, 00:19:48.266 "ddgst": ${ddgst:-false} 00:19:48.266 }, 00:19:48.266 "method": "bdev_nvme_attach_controller" 00:19:48.266 } 00:19:48.266 EOF 00:19:48.266 )") 00:19:48.266 16:30:57 -- nvmf/common.sh@543 -- # cat 00:19:48.266 16:30:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:48.266 16:30:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:48.266 { 00:19:48.266 "params": { 00:19:48.266 "name": "Nvme$subsystem", 00:19:48.266 "trtype": "$TEST_TRANSPORT", 00:19:48.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:48.266 "adrfam": "ipv4", 00:19:48.266 "trsvcid": "$NVMF_PORT", 00:19:48.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:48.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:48.266 "hdgst": ${hdgst:-false}, 00:19:48.266 "ddgst": ${ddgst:-false} 00:19:48.266 }, 00:19:48.266 "method": "bdev_nvme_attach_controller" 00:19:48.266 } 00:19:48.266 EOF 00:19:48.266 )") 00:19:48.266 16:30:57 -- nvmf/common.sh@543 -- # cat 00:19:48.266 16:30:57 -- nvmf/common.sh@523 -- # 
for subsystem in "${@:-1}" 00:19:48.266 16:30:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:48.266 { 00:19:48.266 "params": { 00:19:48.266 "name": "Nvme$subsystem", 00:19:48.266 "trtype": "$TEST_TRANSPORT", 00:19:48.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:48.266 "adrfam": "ipv4", 00:19:48.266 "trsvcid": "$NVMF_PORT", 00:19:48.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:48.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:48.266 "hdgst": ${hdgst:-false}, 00:19:48.266 "ddgst": ${ddgst:-false} 00:19:48.266 }, 00:19:48.266 "method": "bdev_nvme_attach_controller" 00:19:48.266 } 00:19:48.266 EOF 00:19:48.266 )") 00:19:48.266 16:30:57 -- nvmf/common.sh@543 -- # cat 00:19:48.266 16:30:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:48.266 16:30:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:48.266 { 00:19:48.266 "params": { 00:19:48.266 "name": "Nvme$subsystem", 00:19:48.266 "trtype": "$TEST_TRANSPORT", 00:19:48.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:48.266 "adrfam": "ipv4", 00:19:48.266 "trsvcid": "$NVMF_PORT", 00:19:48.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:48.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:48.266 "hdgst": ${hdgst:-false}, 00:19:48.266 "ddgst": ${ddgst:-false} 00:19:48.266 }, 00:19:48.266 "method": "bdev_nvme_attach_controller" 00:19:48.266 } 00:19:48.266 EOF 00:19:48.266 )") 00:19:48.266 16:30:57 -- nvmf/common.sh@543 -- # cat 00:19:48.266 [2024-04-26 16:30:57.251700] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:19:48.266 [2024-04-26 16:30:57.251766] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid511934 ] 00:19:48.266 16:30:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:48.266 16:30:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:48.266 { 00:19:48.266 "params": { 00:19:48.266 "name": "Nvme$subsystem", 00:19:48.266 "trtype": "$TEST_TRANSPORT", 00:19:48.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:48.266 "adrfam": "ipv4", 00:19:48.266 "trsvcid": "$NVMF_PORT", 00:19:48.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:48.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:48.266 "hdgst": ${hdgst:-false}, 00:19:48.266 "ddgst": ${ddgst:-false} 00:19:48.266 }, 00:19:48.266 "method": "bdev_nvme_attach_controller" 00:19:48.266 } 00:19:48.266 EOF 00:19:48.266 )") 00:19:48.266 16:30:57 -- nvmf/common.sh@543 -- # cat 00:19:48.266 16:30:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:48.266 16:30:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:48.266 { 00:19:48.266 "params": { 00:19:48.266 "name": "Nvme$subsystem", 00:19:48.266 "trtype": "$TEST_TRANSPORT", 00:19:48.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:48.266 "adrfam": "ipv4", 00:19:48.266 "trsvcid": "$NVMF_PORT", 00:19:48.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:48.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:48.266 "hdgst": ${hdgst:-false}, 00:19:48.266 "ddgst": ${ddgst:-false} 00:19:48.266 }, 00:19:48.266 "method": "bdev_nvme_attach_controller" 00:19:48.266 } 00:19:48.266 EOF 00:19:48.266 )") 00:19:48.266 16:30:57 -- nvmf/common.sh@543 -- # cat 00:19:48.266 16:30:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:48.266 16:30:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:48.266 { 00:19:48.266 
"params": { 00:19:48.266 "name": "Nvme$subsystem", 00:19:48.266 "trtype": "$TEST_TRANSPORT", 00:19:48.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:48.266 "adrfam": "ipv4", 00:19:48.266 "trsvcid": "$NVMF_PORT", 00:19:48.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:48.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:48.266 "hdgst": ${hdgst:-false}, 00:19:48.266 "ddgst": ${ddgst:-false} 00:19:48.266 }, 00:19:48.266 "method": "bdev_nvme_attach_controller" 00:19:48.266 } 00:19:48.266 EOF 00:19:48.266 )") 00:19:48.266 16:30:57 -- nvmf/common.sh@543 -- # cat 00:19:48.266 16:30:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:48.266 16:30:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:48.266 { 00:19:48.266 "params": { 00:19:48.266 "name": "Nvme$subsystem", 00:19:48.267 "trtype": "$TEST_TRANSPORT", 00:19:48.267 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:48.267 "adrfam": "ipv4", 00:19:48.267 "trsvcid": "$NVMF_PORT", 00:19:48.267 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:48.267 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:48.267 "hdgst": ${hdgst:-false}, 00:19:48.267 "ddgst": ${ddgst:-false} 00:19:48.267 }, 00:19:48.267 "method": "bdev_nvme_attach_controller" 00:19:48.267 } 00:19:48.267 EOF 00:19:48.267 )") 00:19:48.267 16:30:57 -- nvmf/common.sh@543 -- # cat 00:19:48.267 EAL: No free 2048 kB hugepages reported on node 1 00:19:48.267 16:30:57 -- nvmf/common.sh@545 -- # jq . 00:19:48.526 16:30:57 -- nvmf/common.sh@546 -- # IFS=, 00:19:48.526 16:30:57 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:19:48.526 "params": { 00:19:48.526 "name": "Nvme1", 00:19:48.526 "trtype": "rdma", 00:19:48.526 "traddr": "192.168.100.8", 00:19:48.526 "adrfam": "ipv4", 00:19:48.526 "trsvcid": "4420", 00:19:48.526 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.526 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:48.526 "hdgst": false, 00:19:48.526 "ddgst": false 00:19:48.526 }, 00:19:48.526 "method": "bdev_nvme_attach_controller" 00:19:48.526 },{ 00:19:48.526 "params": { 00:19:48.526 "name": "Nvme2", 00:19:48.526 "trtype": "rdma", 00:19:48.526 "traddr": "192.168.100.8", 00:19:48.526 "adrfam": "ipv4", 00:19:48.526 "trsvcid": "4420", 00:19:48.526 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:48.526 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:48.526 "hdgst": false, 00:19:48.526 "ddgst": false 00:19:48.526 }, 00:19:48.526 "method": "bdev_nvme_attach_controller" 00:19:48.526 },{ 00:19:48.526 "params": { 00:19:48.526 "name": "Nvme3", 00:19:48.526 "trtype": "rdma", 00:19:48.526 "traddr": "192.168.100.8", 00:19:48.526 "adrfam": "ipv4", 00:19:48.526 "trsvcid": "4420", 00:19:48.526 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:48.526 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:48.526 "hdgst": false, 00:19:48.526 "ddgst": false 00:19:48.526 }, 00:19:48.526 "method": "bdev_nvme_attach_controller" 00:19:48.526 },{ 00:19:48.526 "params": { 00:19:48.526 "name": "Nvme4", 00:19:48.526 "trtype": "rdma", 00:19:48.526 "traddr": "192.168.100.8", 00:19:48.526 "adrfam": "ipv4", 00:19:48.526 "trsvcid": "4420", 00:19:48.526 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:48.526 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:48.526 "hdgst": false, 00:19:48.526 "ddgst": false 00:19:48.526 }, 00:19:48.526 "method": "bdev_nvme_attach_controller" 00:19:48.526 },{ 00:19:48.526 "params": { 00:19:48.526 "name": "Nvme5", 00:19:48.526 "trtype": "rdma", 00:19:48.526 "traddr": "192.168.100.8", 00:19:48.526 "adrfam": "ipv4", 00:19:48.526 "trsvcid": "4420", 00:19:48.526 "subnqn": 
"nqn.2016-06.io.spdk:cnode5", 00:19:48.526 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:48.526 "hdgst": false, 00:19:48.526 "ddgst": false 00:19:48.526 }, 00:19:48.526 "method": "bdev_nvme_attach_controller" 00:19:48.526 },{ 00:19:48.526 "params": { 00:19:48.526 "name": "Nvme6", 00:19:48.526 "trtype": "rdma", 00:19:48.526 "traddr": "192.168.100.8", 00:19:48.526 "adrfam": "ipv4", 00:19:48.526 "trsvcid": "4420", 00:19:48.526 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:48.526 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:48.526 "hdgst": false, 00:19:48.526 "ddgst": false 00:19:48.526 }, 00:19:48.526 "method": "bdev_nvme_attach_controller" 00:19:48.526 },{ 00:19:48.526 "params": { 00:19:48.526 "name": "Nvme7", 00:19:48.526 "trtype": "rdma", 00:19:48.526 "traddr": "192.168.100.8", 00:19:48.526 "adrfam": "ipv4", 00:19:48.526 "trsvcid": "4420", 00:19:48.526 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:48.526 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:48.526 "hdgst": false, 00:19:48.526 "ddgst": false 00:19:48.526 }, 00:19:48.526 "method": "bdev_nvme_attach_controller" 00:19:48.526 },{ 00:19:48.526 "params": { 00:19:48.526 "name": "Nvme8", 00:19:48.526 "trtype": "rdma", 00:19:48.526 "traddr": "192.168.100.8", 00:19:48.526 "adrfam": "ipv4", 00:19:48.526 "trsvcid": "4420", 00:19:48.526 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:48.526 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:48.526 "hdgst": false, 00:19:48.526 "ddgst": false 00:19:48.526 }, 00:19:48.526 "method": "bdev_nvme_attach_controller" 00:19:48.526 },{ 00:19:48.526 "params": { 00:19:48.526 "name": "Nvme9", 00:19:48.526 "trtype": "rdma", 00:19:48.526 "traddr": "192.168.100.8", 00:19:48.526 "adrfam": "ipv4", 00:19:48.526 "trsvcid": "4420", 00:19:48.526 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:48.526 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:48.526 "hdgst": false, 00:19:48.526 "ddgst": false 00:19:48.526 }, 00:19:48.526 "method": "bdev_nvme_attach_controller" 00:19:48.526 },{ 00:19:48.526 "params": { 00:19:48.526 "name": "Nvme10", 00:19:48.526 "trtype": "rdma", 00:19:48.526 "traddr": "192.168.100.8", 00:19:48.526 "adrfam": "ipv4", 00:19:48.526 "trsvcid": "4420", 00:19:48.526 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:48.526 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:48.526 "hdgst": false, 00:19:48.526 "ddgst": false 00:19:48.526 }, 00:19:48.526 "method": "bdev_nvme_attach_controller" 00:19:48.526 }' 00:19:48.526 [2024-04-26 16:30:57.326651] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.526 [2024-04-26 16:30:57.404201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.464 Running I/O for 10 seconds... 
00:19:49.464 16:30:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:49.464 16:30:58 -- common/autotest_common.sh@850 -- # return 0 00:19:49.464 16:30:58 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:49.464 16:30:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.464 16:30:58 -- common/autotest_common.sh@10 -- # set +x 00:19:49.464 16:30:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.464 16:30:58 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:49.464 16:30:58 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:49.464 16:30:58 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:19:49.464 16:30:58 -- target/shutdown.sh@57 -- # local ret=1 00:19:49.464 16:30:58 -- target/shutdown.sh@58 -- # local i 00:19:49.464 16:30:58 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:19:49.464 16:30:58 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:49.464 16:30:58 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:49.464 16:30:58 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:49.464 16:30:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.464 16:30:58 -- common/autotest_common.sh@10 -- # set +x 00:19:49.722 16:30:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.722 16:30:58 -- target/shutdown.sh@60 -- # read_io_count=3 00:19:49.722 16:30:58 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:19:49.722 16:30:58 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:49.980 16:30:58 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:49.980 16:30:58 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:49.980 16:30:58 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:49.980 16:30:58 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:49.980 16:30:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:49.980 16:30:58 -- common/autotest_common.sh@10 -- # set +x 00:19:49.980 16:30:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:49.980 16:30:58 -- target/shutdown.sh@60 -- # read_io_count=131 00:19:49.980 16:30:58 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:19:49.980 16:30:58 -- target/shutdown.sh@64 -- # ret=0 00:19:49.980 16:30:58 -- target/shutdown.sh@65 -- # break 00:19:49.980 16:30:58 -- target/shutdown.sh@69 -- # return 0 00:19:49.980 16:30:58 -- target/shutdown.sh@110 -- # killprocess 511934 00:19:49.980 16:30:58 -- common/autotest_common.sh@936 -- # '[' -z 511934 ']' 00:19:49.980 16:30:58 -- common/autotest_common.sh@940 -- # kill -0 511934 00:19:49.980 16:30:58 -- common/autotest_common.sh@941 -- # uname 00:19:49.980 16:30:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:49.980 16:30:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 511934 00:19:50.240 16:30:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:50.240 16:30:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:50.240 16:30:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 511934' 00:19:50.240 killing process with pid 511934 00:19:50.240 16:30:59 -- common/autotest_common.sh@955 -- # kill 511934 00:19:50.240 16:30:59 -- common/autotest_common.sh@960 -- # wait 511934 00:19:50.240 Received shutdown signal, test time was about 0.813556 seconds 00:19:50.240 00:19:50.240 Latency(us) 00:19:50.240 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:19:50.240 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:50.240 Verification LBA range: start 0x0 length 0x400
00:19:50.240 Nvme1n1 : 0.79 322.82 20.18 0.00 0.00 195924.15 50377.24 204244.37
00:19:50.240 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:50.240 Verification LBA range: start 0x0 length 0x400
00:19:50.240 Nvme2n1 : 0.80 327.00 20.44 0.00 0.00 188218.72 6496.61 190567.29
00:19:50.240 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:50.240 Verification LBA range: start 0x0 length 0x400
00:19:50.240 Nvme3n1 : 0.80 398.76 24.92 0.00 0.00 151777.59 5613.30 144977.03
00:19:50.240 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:50.240 Verification LBA range: start 0x0 length 0x400
00:19:50.240 Nvme4n1 : 0.80 398.17 24.89 0.00 0.00 148825.13 7066.49 138594.39
00:19:50.240 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:50.240 Verification LBA range: start 0x0 length 0x400
00:19:50.240 Nvme5n1 : 0.81 397.35 24.83 0.00 0.00 146798.15 7864.32 128564.54
00:19:50.240 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:50.240 Verification LBA range: start 0x0 length 0x400
00:19:50.240 Nvme6n1 : 0.81 396.71 24.79 0.00 0.00 143302.57 8434.20 119446.48
00:19:50.240 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:50.240 Verification LBA range: start 0x0 length 0x400
00:19:50.240 Nvme7n1 : 0.81 396.05 24.75 0.00 0.00 140674.27 9004.08 114431.55
00:19:50.240 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:50.240 Verification LBA range: start 0x0 length 0x400
00:19:50.240 Nvme8n1 : 0.81 395.39 24.71 0.00 0.00 137808.14 9516.97 107593.02
00:19:50.240 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:50.240 Verification LBA range: start 0x0 length 0x400
00:19:50.240 Nvme9n1 : 0.81 394.62 24.66 0.00 0.00 135595.41 10314.80 96651.35
00:19:50.240 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:50.240 Verification LBA range: start 0x0 length 0x400
00:19:50.240 Nvme10n1 : 0.81 315.07 19.69 0.00 0.00 165863.85 9118.05 215186.03
00:19:50.240 ===================================================================================================================
00:19:50.240 Total : 3741.94 233.87 0.00 0.00 153769.42 5613.30 215186.03
00:19:50.499 16:30:59 -- target/shutdown.sh@113 -- # sleep 1
00:19:51.432 16:31:00 -- target/shutdown.sh@114 -- # kill -0 511696
00:19:51.432 16:31:00 -- target/shutdown.sh@116 -- # stoptarget
00:19:51.432 16:31:00 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:19:51.432 16:31:00 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:19:51.432 16:31:00 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:19:51.432 16:31:00 -- target/shutdown.sh@45 -- # nvmftestfini
00:19:51.432 16:31:00 -- nvmf/common.sh@477 -- # nvmfcleanup
00:19:51.432 16:31:00 -- nvmf/common.sh@117 -- # sync
00:19:51.432 16:31:00 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:19:51.432 16:31:00 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:19:51.432 16:31:00 -- nvmf/common.sh@120 -- # set +e
00:19:51.432 16:31:00 -- nvmf/common.sh@121 -- # for i in {1..20}
00:19:51.432 16:31:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:19:51.432 rmmod
nvme_rdma 00:19:51.432 rmmod nvme_fabrics 00:19:51.432 16:31:00 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:51.692 16:31:00 -- nvmf/common.sh@124 -- # set -e 00:19:51.692 16:31:00 -- nvmf/common.sh@125 -- # return 0 00:19:51.692 16:31:00 -- nvmf/common.sh@478 -- # '[' -n 511696 ']' 00:19:51.692 16:31:00 -- nvmf/common.sh@479 -- # killprocess 511696 00:19:51.692 16:31:00 -- common/autotest_common.sh@936 -- # '[' -z 511696 ']' 00:19:51.692 16:31:00 -- common/autotest_common.sh@940 -- # kill -0 511696 00:19:51.692 16:31:00 -- common/autotest_common.sh@941 -- # uname 00:19:51.692 16:31:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:51.692 16:31:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 511696 00:19:51.692 16:31:00 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:51.692 16:31:00 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:51.692 16:31:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 511696' 00:19:51.692 killing process with pid 511696 00:19:51.692 16:31:00 -- common/autotest_common.sh@955 -- # kill 511696 00:19:51.692 16:31:00 -- common/autotest_common.sh@960 -- # wait 511696 00:19:51.692 [2024-04-26 16:31:00.595861] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:19:52.262 16:31:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:52.262 16:31:01 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:19:52.262 00:19:52.262 real 0m5.761s 00:19:52.262 user 0m23.018s 00:19:52.262 sys 0m1.239s 00:19:52.262 16:31:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:52.262 16:31:01 -- common/autotest_common.sh@10 -- # set +x 00:19:52.262 ************************************ 00:19:52.262 END TEST nvmf_shutdown_tc2 00:19:52.262 ************************************ 00:19:52.262 16:31:01 -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:19:52.262 16:31:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:52.262 16:31:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:52.262 16:31:01 -- common/autotest_common.sh@10 -- # set +x 00:19:52.262 ************************************ 00:19:52.262 START TEST nvmf_shutdown_tc3 00:19:52.262 ************************************ 00:19:52.262 16:31:01 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc3 00:19:52.262 16:31:01 -- target/shutdown.sh@121 -- # starttarget 00:19:52.262 16:31:01 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:52.262 16:31:01 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:19:52.262 16:31:01 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:52.262 16:31:01 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:52.262 16:31:01 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:52.262 16:31:01 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:52.262 16:31:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.262 16:31:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:52.262 16:31:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:52.262 16:31:01 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:52.262 16:31:01 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:52.262 16:31:01 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:52.262 16:31:01 -- common/autotest_common.sh@10 -- # set +x 00:19:52.262 16:31:01 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:52.262 16:31:01 -- nvmf/common.sh@291 -- # 
pci_devs=() 00:19:52.262 16:31:01 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:52.262 16:31:01 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:52.262 16:31:01 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:52.262 16:31:01 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:52.262 16:31:01 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:52.521 16:31:01 -- nvmf/common.sh@295 -- # net_devs=() 00:19:52.521 16:31:01 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:52.521 16:31:01 -- nvmf/common.sh@296 -- # e810=() 00:19:52.521 16:31:01 -- nvmf/common.sh@296 -- # local -ga e810 00:19:52.522 16:31:01 -- nvmf/common.sh@297 -- # x722=() 00:19:52.522 16:31:01 -- nvmf/common.sh@297 -- # local -ga x722 00:19:52.522 16:31:01 -- nvmf/common.sh@298 -- # mlx=() 00:19:52.522 16:31:01 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:52.522 16:31:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:52.522 16:31:01 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:52.522 16:31:01 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:52.522 16:31:01 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:52.522 16:31:01 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:52.522 16:31:01 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:52.522 16:31:01 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:52.522 16:31:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:52.522 16:31:01 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:52.522 16:31:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:52.522 16:31:01 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:52.522 16:31:01 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:52.522 16:31:01 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:19:52.522 16:31:01 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:19:52.522 16:31:01 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:19:52.522 16:31:01 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:19:52.522 16:31:01 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:19:52.522 16:31:01 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:52.522 16:31:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:52.522 16:31:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:19:52.522 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:19:52.522 16:31:01 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:52.522 16:31:01 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:52.522 16:31:01 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:19:52.522 16:31:01 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:19:52.522 16:31:01 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:52.522 16:31:01 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:52.522 16:31:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:52.522 16:31:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:19:52.522 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:19:52.522 16:31:01 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:52.522 16:31:01 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:52.522 16:31:01 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:19:52.522 16:31:01 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 
00:19:52.522 16:31:01 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:52.522 16:31:01 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:52.522 16:31:01 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:52.522 16:31:01 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:19:52.522 16:31:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:52.522 16:31:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.522 16:31:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:52.522 16:31:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.522 16:31:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:19:52.522 Found net devices under 0000:18:00.0: mlx_0_0 00:19:52.522 16:31:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.522 16:31:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:52.522 16:31:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:52.522 16:31:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:52.522 16:31:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:52.522 16:31:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:19:52.522 Found net devices under 0000:18:00.1: mlx_0_1 00:19:52.522 16:31:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:52.522 16:31:01 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:52.522 16:31:01 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:52.522 16:31:01 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:52.522 16:31:01 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:19:52.522 16:31:01 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:19:52.522 16:31:01 -- nvmf/common.sh@409 -- # rdma_device_init 00:19:52.522 16:31:01 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:19:52.522 16:31:01 -- nvmf/common.sh@58 -- # uname 00:19:52.522 16:31:01 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:19:52.522 16:31:01 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:19:52.522 16:31:01 -- nvmf/common.sh@63 -- # modprobe ib_core 00:19:52.522 16:31:01 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:19:52.522 16:31:01 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:19:52.522 16:31:01 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:19:52.522 16:31:01 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:19:52.522 16:31:01 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:19:52.522 16:31:01 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:19:52.522 16:31:01 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:52.522 16:31:01 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:19:52.522 16:31:01 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:52.522 16:31:01 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:52.522 16:31:01 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:52.522 16:31:01 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:52.522 16:31:01 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:52.522 16:31:01 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:52.522 16:31:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:52.522 16:31:01 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:52.522 16:31:01 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:52.522 16:31:01 -- nvmf/common.sh@105 -- # continue 2 00:19:52.522 16:31:01 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:52.522 
16:31:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:52.522 16:31:01 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:52.522 16:31:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:52.522 16:31:01 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:52.522 16:31:01 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:52.522 16:31:01 -- nvmf/common.sh@105 -- # continue 2 00:19:52.522 16:31:01 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:52.522 16:31:01 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:19:52.522 16:31:01 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:52.522 16:31:01 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:52.522 16:31:01 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:52.522 16:31:01 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:52.522 16:31:01 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:19:52.522 16:31:01 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:19:52.522 16:31:01 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:19:52.522 15: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:52.522 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:19:52.522 altname enp24s0f0np0 00:19:52.522 altname ens785f0np0 00:19:52.522 inet 192.168.100.8/24 scope global mlx_0_0 00:19:52.522 valid_lft forever preferred_lft forever 00:19:52.522 16:31:01 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:52.522 16:31:01 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:19:52.522 16:31:01 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:52.522 16:31:01 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:52.522 16:31:01 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:52.522 16:31:01 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:52.522 16:31:01 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:19:52.522 16:31:01 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:19:52.522 16:31:01 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:19:52.522 16: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:52.522 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:19:52.522 altname enp24s0f1np1 00:19:52.522 altname ens785f1np1 00:19:52.522 inet 192.168.100.9/24 scope global mlx_0_1 00:19:52.522 valid_lft forever preferred_lft forever 00:19:52.522 16:31:01 -- nvmf/common.sh@411 -- # return 0 00:19:52.522 16:31:01 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:52.522 16:31:01 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:52.522 16:31:01 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:19:52.522 16:31:01 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:19:52.522 16:31:01 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:19:52.522 16:31:01 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:52.522 16:31:01 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:52.522 16:31:01 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:52.522 16:31:01 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:52.522 16:31:01 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:52.522 16:31:01 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:52.522 16:31:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:52.522 16:31:01 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:52.522 16:31:01 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:52.522 16:31:01 -- nvmf/common.sh@105 -- # continue 2 
00:19:52.522 16:31:01 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:52.522 16:31:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:52.522 16:31:01 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:52.522 16:31:01 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:52.522 16:31:01 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:52.522 16:31:01 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:52.522 16:31:01 -- nvmf/common.sh@105 -- # continue 2 00:19:52.522 16:31:01 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:52.522 16:31:01 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:19:52.522 16:31:01 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:52.522 16:31:01 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:52.522 16:31:01 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:52.522 16:31:01 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:52.522 16:31:01 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:52.522 16:31:01 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:19:52.522 16:31:01 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:52.522 16:31:01 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:52.522 16:31:01 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:52.522 16:31:01 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:52.522 16:31:01 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:19:52.522 192.168.100.9' 00:19:52.522 16:31:01 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:52.522 192.168.100.9' 00:19:52.522 16:31:01 -- nvmf/common.sh@446 -- # head -n 1 00:19:52.522 16:31:01 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:52.522 16:31:01 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:19:52.522 192.168.100.9' 00:19:52.522 16:31:01 -- nvmf/common.sh@447 -- # tail -n +2 00:19:52.522 16:31:01 -- nvmf/common.sh@447 -- # head -n 1 00:19:52.522 16:31:01 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:52.522 16:31:01 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:19:52.522 16:31:01 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:52.522 16:31:01 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:19:52.522 16:31:01 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:19:52.522 16:31:01 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:19:52.522 16:31:01 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:52.781 16:31:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:52.781 16:31:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:52.781 16:31:01 -- common/autotest_common.sh@10 -- # set +x 00:19:52.781 16:31:01 -- nvmf/common.sh@470 -- # nvmfpid=512603 00:19:52.781 16:31:01 -- nvmf/common.sh@471 -- # waitforlisten 512603 00:19:52.781 16:31:01 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:52.781 16:31:01 -- common/autotest_common.sh@817 -- # '[' -z 512603 ']' 00:19:52.781 16:31:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.781 16:31:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:52.781 16:31:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:52.781 16:31:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:52.781 16:31:01 -- common/autotest_common.sh@10 -- # set +x 00:19:52.781 [2024-04-26 16:31:01.601578] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:19:52.781 [2024-04-26 16:31:01.601632] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.781 EAL: No free 2048 kB hugepages reported on node 1 00:19:52.781 [2024-04-26 16:31:01.673549] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:52.781 [2024-04-26 16:31:01.757011] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:52.781 [2024-04-26 16:31:01.757053] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:52.781 [2024-04-26 16:31:01.757063] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:52.781 [2024-04-26 16:31:01.757087] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:52.781 [2024-04-26 16:31:01.757095] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:52.781 [2024-04-26 16:31:01.757208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:52.781 [2024-04-26 16:31:01.757285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:52.781 [2024-04-26 16:31:01.757387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:52.781 [2024-04-26 16:31:01.757387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:53.714 16:31:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:53.714 16:31:02 -- common/autotest_common.sh@850 -- # return 0 00:19:53.714 16:31:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:53.714 16:31:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:53.714 16:31:02 -- common/autotest_common.sh@10 -- # set +x 00:19:53.714 16:31:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:53.714 16:31:02 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:53.714 16:31:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.714 16:31:02 -- common/autotest_common.sh@10 -- # set +x 00:19:53.714 [2024-04-26 16:31:02.486984] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1c32600/0x1c36af0) succeed. 00:19:53.714 [2024-04-26 16:31:02.497414] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1c33c40/0x1c78180) succeed. 
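The Malloc1 through Malloc10 notices and the RDMA listener message below come from the batch of RPCs that shutdown.sh assembles in rpcs.txt (the cat lines below) and then replays via rpc_cmd. A roughly equivalent manual bring-up is sketched here for orientation only; the transport options are the ones from the rpc_cmd above, while the malloc size, block size and serial numbers are assumptions, not taken from the log.

# Illustrative stand-in for the batched target RPCs (sizes and serials assumed):
rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
for i in {1..10}; do
  $rpc bdev_malloc_create 64 512 -b Malloc$i
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
done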
00:19:53.714 16:31:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:53.714 16:31:02 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:53.714 16:31:02 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:53.714 16:31:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:53.714 16:31:02 -- common/autotest_common.sh@10 -- # set +x 00:19:53.714 16:31:02 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:53.714 16:31:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:53.714 16:31:02 -- target/shutdown.sh@28 -- # cat 00:19:53.714 16:31:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:53.714 16:31:02 -- target/shutdown.sh@28 -- # cat 00:19:53.714 16:31:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:53.714 16:31:02 -- target/shutdown.sh@28 -- # cat 00:19:53.715 16:31:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:53.715 16:31:02 -- target/shutdown.sh@28 -- # cat 00:19:53.715 16:31:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:53.715 16:31:02 -- target/shutdown.sh@28 -- # cat 00:19:53.715 16:31:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:53.715 16:31:02 -- target/shutdown.sh@28 -- # cat 00:19:53.715 16:31:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:53.715 16:31:02 -- target/shutdown.sh@28 -- # cat 00:19:53.715 16:31:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:53.715 16:31:02 -- target/shutdown.sh@28 -- # cat 00:19:53.715 16:31:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:53.715 16:31:02 -- target/shutdown.sh@28 -- # cat 00:19:53.715 16:31:02 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:53.715 16:31:02 -- target/shutdown.sh@28 -- # cat 00:19:53.715 16:31:02 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:53.715 16:31:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:53.715 16:31:02 -- common/autotest_common.sh@10 -- # set +x 00:19:53.715 Malloc1 00:19:53.715 [2024-04-26 16:31:02.732860] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:53.973 Malloc2 00:19:53.973 Malloc3 00:19:53.973 Malloc4 00:19:53.973 Malloc5 00:19:53.973 Malloc6 00:19:53.973 Malloc7 00:19:54.232 Malloc8 00:19:54.232 Malloc9 00:19:54.232 Malloc10 00:19:54.232 16:31:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:54.232 16:31:03 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:54.232 16:31:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:54.232 16:31:03 -- common/autotest_common.sh@10 -- # set +x 00:19:54.232 16:31:03 -- target/shutdown.sh@125 -- # perfpid=512838 00:19:54.232 16:31:03 -- target/shutdown.sh@126 -- # waitforlisten 512838 /var/tmp/bdevperf.sock 00:19:54.232 16:31:03 -- common/autotest_common.sh@817 -- # '[' -z 512838 ']' 00:19:54.232 16:31:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:54.232 16:31:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:54.232 16:31:03 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:54.232 16:31:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:54.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:54.232 16:31:03 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:54.232 16:31:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:54.232 16:31:03 -- common/autotest_common.sh@10 -- # set +x 00:19:54.232 16:31:03 -- nvmf/common.sh@521 -- # config=() 00:19:54.232 16:31:03 -- nvmf/common.sh@521 -- # local subsystem config 00:19:54.232 16:31:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:54.232 16:31:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:54.232 { 00:19:54.232 "params": { 00:19:54.232 "name": "Nvme$subsystem", 00:19:54.232 "trtype": "$TEST_TRANSPORT", 00:19:54.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.232 "adrfam": "ipv4", 00:19:54.232 "trsvcid": "$NVMF_PORT", 00:19:54.232 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.232 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.232 "hdgst": ${hdgst:-false}, 00:19:54.232 "ddgst": ${ddgst:-false} 00:19:54.232 }, 00:19:54.232 "method": "bdev_nvme_attach_controller" 00:19:54.232 } 00:19:54.232 EOF 00:19:54.232 )") 00:19:54.232 16:31:03 -- nvmf/common.sh@543 -- # cat 00:19:54.232 16:31:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:54.232 16:31:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:54.232 { 00:19:54.232 "params": { 00:19:54.232 "name": "Nvme$subsystem", 00:19:54.232 "trtype": "$TEST_TRANSPORT", 00:19:54.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.232 "adrfam": "ipv4", 00:19:54.232 "trsvcid": "$NVMF_PORT", 00:19:54.232 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.232 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.232 "hdgst": ${hdgst:-false}, 00:19:54.232 "ddgst": ${ddgst:-false} 00:19:54.232 }, 00:19:54.232 "method": "bdev_nvme_attach_controller" 00:19:54.232 } 00:19:54.232 EOF 00:19:54.232 )") 00:19:54.232 16:31:03 -- nvmf/common.sh@543 -- # cat 00:19:54.232 16:31:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:54.232 16:31:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:54.232 { 00:19:54.232 "params": { 00:19:54.232 "name": "Nvme$subsystem", 00:19:54.232 "trtype": "$TEST_TRANSPORT", 00:19:54.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.232 "adrfam": "ipv4", 00:19:54.232 "trsvcid": "$NVMF_PORT", 00:19:54.232 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.232 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.232 "hdgst": ${hdgst:-false}, 00:19:54.232 "ddgst": ${ddgst:-false} 00:19:54.232 }, 00:19:54.232 "method": "bdev_nvme_attach_controller" 00:19:54.232 } 00:19:54.232 EOF 00:19:54.232 )") 00:19:54.232 16:31:03 -- nvmf/common.sh@543 -- # cat 00:19:54.232 16:31:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:54.232 16:31:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:54.232 { 00:19:54.232 "params": { 00:19:54.232 "name": "Nvme$subsystem", 00:19:54.232 "trtype": "$TEST_TRANSPORT", 00:19:54.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.232 "adrfam": "ipv4", 00:19:54.232 "trsvcid": "$NVMF_PORT", 00:19:54.232 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.232 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.232 "hdgst": ${hdgst:-false}, 00:19:54.232 "ddgst": ${ddgst:-false} 00:19:54.232 }, 00:19:54.232 "method": "bdev_nvme_attach_controller" 00:19:54.232 } 00:19:54.232 EOF 00:19:54.232 )") 00:19:54.232 16:31:03 -- nvmf/common.sh@543 -- # cat 00:19:54.232 16:31:03 -- nvmf/common.sh@523 -- # 
for subsystem in "${@:-1}" 00:19:54.232 16:31:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:54.232 { 00:19:54.232 "params": { 00:19:54.232 "name": "Nvme$subsystem", 00:19:54.232 "trtype": "$TEST_TRANSPORT", 00:19:54.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.232 "adrfam": "ipv4", 00:19:54.232 "trsvcid": "$NVMF_PORT", 00:19:54.232 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.232 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.232 "hdgst": ${hdgst:-false}, 00:19:54.232 "ddgst": ${ddgst:-false} 00:19:54.232 }, 00:19:54.232 "method": "bdev_nvme_attach_controller" 00:19:54.232 } 00:19:54.233 EOF 00:19:54.233 )") 00:19:54.233 16:31:03 -- nvmf/common.sh@543 -- # cat 00:19:54.233 16:31:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:54.233 16:31:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:54.233 { 00:19:54.233 "params": { 00:19:54.233 "name": "Nvme$subsystem", 00:19:54.233 "trtype": "$TEST_TRANSPORT", 00:19:54.233 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.233 "adrfam": "ipv4", 00:19:54.233 "trsvcid": "$NVMF_PORT", 00:19:54.233 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.233 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.233 "hdgst": ${hdgst:-false}, 00:19:54.233 "ddgst": ${ddgst:-false} 00:19:54.233 }, 00:19:54.233 "method": "bdev_nvme_attach_controller" 00:19:54.233 } 00:19:54.233 EOF 00:19:54.233 )") 00:19:54.233 [2024-04-26 16:31:03.243359] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:19:54.233 [2024-04-26 16:31:03.243420] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid512838 ] 00:19:54.233 16:31:03 -- nvmf/common.sh@543 -- # cat 00:19:54.233 16:31:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:54.233 16:31:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:54.233 { 00:19:54.233 "params": { 00:19:54.233 "name": "Nvme$subsystem", 00:19:54.233 "trtype": "$TEST_TRANSPORT", 00:19:54.233 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.233 "adrfam": "ipv4", 00:19:54.233 "trsvcid": "$NVMF_PORT", 00:19:54.233 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.233 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.233 "hdgst": ${hdgst:-false}, 00:19:54.233 "ddgst": ${ddgst:-false} 00:19:54.233 }, 00:19:54.233 "method": "bdev_nvme_attach_controller" 00:19:54.233 } 00:19:54.233 EOF 00:19:54.233 )") 00:19:54.233 16:31:03 -- nvmf/common.sh@543 -- # cat 00:19:54.492 16:31:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:54.492 16:31:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:54.492 { 00:19:54.492 "params": { 00:19:54.492 "name": "Nvme$subsystem", 00:19:54.492 "trtype": "$TEST_TRANSPORT", 00:19:54.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.492 "adrfam": "ipv4", 00:19:54.492 "trsvcid": "$NVMF_PORT", 00:19:54.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.492 "hdgst": ${hdgst:-false}, 00:19:54.492 "ddgst": ${ddgst:-false} 00:19:54.492 }, 00:19:54.492 "method": "bdev_nvme_attach_controller" 00:19:54.492 } 00:19:54.492 EOF 00:19:54.492 )") 00:19:54.492 16:31:03 -- nvmf/common.sh@543 -- # cat 00:19:54.492 16:31:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:54.492 16:31:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:54.492 { 00:19:54.492 
"params": { 00:19:54.492 "name": "Nvme$subsystem", 00:19:54.492 "trtype": "$TEST_TRANSPORT", 00:19:54.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.492 "adrfam": "ipv4", 00:19:54.492 "trsvcid": "$NVMF_PORT", 00:19:54.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.492 "hdgst": ${hdgst:-false}, 00:19:54.492 "ddgst": ${ddgst:-false} 00:19:54.492 }, 00:19:54.492 "method": "bdev_nvme_attach_controller" 00:19:54.492 } 00:19:54.492 EOF 00:19:54.492 )") 00:19:54.492 16:31:03 -- nvmf/common.sh@543 -- # cat 00:19:54.492 16:31:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:54.492 16:31:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:54.492 { 00:19:54.492 "params": { 00:19:54.492 "name": "Nvme$subsystem", 00:19:54.492 "trtype": "$TEST_TRANSPORT", 00:19:54.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.492 "adrfam": "ipv4", 00:19:54.492 "trsvcid": "$NVMF_PORT", 00:19:54.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.492 "hdgst": ${hdgst:-false}, 00:19:54.492 "ddgst": ${ddgst:-false} 00:19:54.492 }, 00:19:54.492 "method": "bdev_nvme_attach_controller" 00:19:54.492 } 00:19:54.492 EOF 00:19:54.492 )") 00:19:54.492 16:31:03 -- nvmf/common.sh@543 -- # cat 00:19:54.492 EAL: No free 2048 kB hugepages reported on node 1 00:19:54.492 16:31:03 -- nvmf/common.sh@545 -- # jq . 00:19:54.492 16:31:03 -- nvmf/common.sh@546 -- # IFS=, 00:19:54.492 16:31:03 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:19:54.492 "params": { 00:19:54.492 "name": "Nvme1", 00:19:54.492 "trtype": "rdma", 00:19:54.492 "traddr": "192.168.100.8", 00:19:54.492 "adrfam": "ipv4", 00:19:54.492 "trsvcid": "4420", 00:19:54.492 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:54.492 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:54.492 "hdgst": false, 00:19:54.492 "ddgst": false 00:19:54.492 }, 00:19:54.492 "method": "bdev_nvme_attach_controller" 00:19:54.492 },{ 00:19:54.492 "params": { 00:19:54.492 "name": "Nvme2", 00:19:54.492 "trtype": "rdma", 00:19:54.492 "traddr": "192.168.100.8", 00:19:54.492 "adrfam": "ipv4", 00:19:54.492 "trsvcid": "4420", 00:19:54.492 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:54.492 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:54.492 "hdgst": false, 00:19:54.492 "ddgst": false 00:19:54.492 }, 00:19:54.492 "method": "bdev_nvme_attach_controller" 00:19:54.492 },{ 00:19:54.492 "params": { 00:19:54.492 "name": "Nvme3", 00:19:54.492 "trtype": "rdma", 00:19:54.492 "traddr": "192.168.100.8", 00:19:54.492 "adrfam": "ipv4", 00:19:54.492 "trsvcid": "4420", 00:19:54.492 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:54.492 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:54.492 "hdgst": false, 00:19:54.492 "ddgst": false 00:19:54.492 }, 00:19:54.492 "method": "bdev_nvme_attach_controller" 00:19:54.492 },{ 00:19:54.492 "params": { 00:19:54.492 "name": "Nvme4", 00:19:54.492 "trtype": "rdma", 00:19:54.492 "traddr": "192.168.100.8", 00:19:54.492 "adrfam": "ipv4", 00:19:54.492 "trsvcid": "4420", 00:19:54.492 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:54.492 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:54.492 "hdgst": false, 00:19:54.492 "ddgst": false 00:19:54.492 }, 00:19:54.492 "method": "bdev_nvme_attach_controller" 00:19:54.492 },{ 00:19:54.492 "params": { 00:19:54.492 "name": "Nvme5", 00:19:54.492 "trtype": "rdma", 00:19:54.492 "traddr": "192.168.100.8", 00:19:54.492 "adrfam": "ipv4", 00:19:54.492 "trsvcid": "4420", 00:19:54.492 "subnqn": 
"nqn.2016-06.io.spdk:cnode5", 00:19:54.492 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:54.492 "hdgst": false, 00:19:54.492 "ddgst": false 00:19:54.492 }, 00:19:54.492 "method": "bdev_nvme_attach_controller" 00:19:54.492 },{ 00:19:54.492 "params": { 00:19:54.492 "name": "Nvme6", 00:19:54.492 "trtype": "rdma", 00:19:54.492 "traddr": "192.168.100.8", 00:19:54.492 "adrfam": "ipv4", 00:19:54.492 "trsvcid": "4420", 00:19:54.492 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:54.492 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:54.492 "hdgst": false, 00:19:54.492 "ddgst": false 00:19:54.492 }, 00:19:54.492 "method": "bdev_nvme_attach_controller" 00:19:54.492 },{ 00:19:54.492 "params": { 00:19:54.492 "name": "Nvme7", 00:19:54.492 "trtype": "rdma", 00:19:54.492 "traddr": "192.168.100.8", 00:19:54.492 "adrfam": "ipv4", 00:19:54.492 "trsvcid": "4420", 00:19:54.492 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:54.492 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:54.492 "hdgst": false, 00:19:54.492 "ddgst": false 00:19:54.492 }, 00:19:54.492 "method": "bdev_nvme_attach_controller" 00:19:54.492 },{ 00:19:54.492 "params": { 00:19:54.492 "name": "Nvme8", 00:19:54.492 "trtype": "rdma", 00:19:54.492 "traddr": "192.168.100.8", 00:19:54.492 "adrfam": "ipv4", 00:19:54.492 "trsvcid": "4420", 00:19:54.492 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:54.492 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:54.492 "hdgst": false, 00:19:54.492 "ddgst": false 00:19:54.492 }, 00:19:54.492 "method": "bdev_nvme_attach_controller" 00:19:54.492 },{ 00:19:54.492 "params": { 00:19:54.492 "name": "Nvme9", 00:19:54.492 "trtype": "rdma", 00:19:54.492 "traddr": "192.168.100.8", 00:19:54.492 "adrfam": "ipv4", 00:19:54.492 "trsvcid": "4420", 00:19:54.492 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:54.492 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:54.492 "hdgst": false, 00:19:54.492 "ddgst": false 00:19:54.492 }, 00:19:54.492 "method": "bdev_nvme_attach_controller" 00:19:54.492 },{ 00:19:54.492 "params": { 00:19:54.492 "name": "Nvme10", 00:19:54.492 "trtype": "rdma", 00:19:54.492 "traddr": "192.168.100.8", 00:19:54.492 "adrfam": "ipv4", 00:19:54.492 "trsvcid": "4420", 00:19:54.492 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:54.492 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:54.492 "hdgst": false, 00:19:54.492 "ddgst": false 00:19:54.492 }, 00:19:54.492 "method": "bdev_nvme_attach_controller" 00:19:54.492 }' 00:19:54.492 [2024-04-26 16:31:03.318905] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.492 [2024-04-26 16:31:03.397251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.428 Running I/O for 10 seconds... 
00:19:55.428 16:31:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:55.428 16:31:04 -- common/autotest_common.sh@850 -- # return 0 00:19:55.428 16:31:04 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:55.428 16:31:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.428 16:31:04 -- common/autotest_common.sh@10 -- # set +x 00:19:55.428 16:31:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.428 16:31:04 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:55.428 16:31:04 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:55.428 16:31:04 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:55.428 16:31:04 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:19:55.428 16:31:04 -- target/shutdown.sh@57 -- # local ret=1 00:19:55.428 16:31:04 -- target/shutdown.sh@58 -- # local i 00:19:55.428 16:31:04 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:19:55.428 16:31:04 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:55.428 16:31:04 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:55.428 16:31:04 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:55.428 16:31:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.428 16:31:04 -- common/autotest_common.sh@10 -- # set +x 00:19:55.687 16:31:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.687 16:31:04 -- target/shutdown.sh@60 -- # read_io_count=3 00:19:55.687 16:31:04 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:19:55.687 16:31:04 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:55.946 16:31:04 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:55.946 16:31:04 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:55.946 16:31:04 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:55.946 16:31:04 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:55.946 16:31:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:55.946 16:31:04 -- common/autotest_common.sh@10 -- # set +x 00:19:55.946 16:31:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:55.946 16:31:04 -- target/shutdown.sh@60 -- # read_io_count=147 00:19:55.946 16:31:04 -- target/shutdown.sh@63 -- # '[' 147 -ge 100 ']' 00:19:55.946 16:31:04 -- target/shutdown.sh@64 -- # ret=0 00:19:55.946 16:31:04 -- target/shutdown.sh@65 -- # break 00:19:55.946 16:31:04 -- target/shutdown.sh@69 -- # return 0 00:19:55.946 16:31:04 -- target/shutdown.sh@135 -- # killprocess 512603 00:19:55.946 16:31:04 -- common/autotest_common.sh@936 -- # '[' -z 512603 ']' 00:19:55.946 16:31:04 -- common/autotest_common.sh@940 -- # kill -0 512603 00:19:55.946 16:31:04 -- common/autotest_common.sh@941 -- # uname 00:19:55.946 16:31:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:55.946 16:31:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 512603 00:19:56.205 16:31:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:56.205 16:31:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:56.205 16:31:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 512603' 00:19:56.205 killing process with pid 512603 00:19:56.205 16:31:05 -- common/autotest_common.sh@955 -- # kill 512603 00:19:56.205 16:31:05 -- common/autotest_common.sh@960 -- # wait 512603 00:19:56.205 [2024-04-26 
16:31:05.119844] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:19:56.772 16:31:05 -- target/shutdown.sh@136 -- # nvmfpid= 00:19:56.772 16:31:05 -- target/shutdown.sh@139 -- # sleep 1 00:19:57.031 [2024-04-26 16:31:06.055329] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256f00 was disconnected and freed. reset controller. 00:19:57.296 [2024-04-26 16:31:06.056911] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256cc0 was disconnected and freed. reset controller. 00:19:57.296 [2024-04-26 16:31:06.058386] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256a80 was disconnected and freed. reset controller. 00:19:57.296 [2024-04-26 16:31:06.060130] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256840 was disconnected and freed. reset controller. 00:19:57.296 [2024-04-26 16:31:06.061832] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256600 was disconnected and freed. reset controller. 00:19:57.296 [2024-04-26 16:31:06.063533] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192563c0 was disconnected and freed. reset controller. 00:19:57.296 [2024-04-26 16:31:06.065388] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256180 was disconnected and freed. reset controller. 00:19:57.296 [2024-04-26 16:31:06.066906] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806c00 was disconnected and freed. reset controller. 00:19:57.296 [2024-04-26 16:31:06.067009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1f0000 len:0x10000 key:0x182d00 00:19:57.296 [2024-04-26 16:31:06.067052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.296 [2024-04-26 16:31:06.067104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1dff80 len:0x10000 key:0x182d00 00:19:57.296 [2024-04-26 16:31:06.067138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.296 [2024-04-26 16:31:06.067176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1cff00 len:0x10000 key:0x182d00 00:19:57.296 [2024-04-26 16:31:06.067214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.296 [2024-04-26 16:31:06.067231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1bfe80 len:0x10000 key:0x182d00 00:19:57.296 [2024-04-26 16:31:06.067245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.296 [2024-04-26 16:31:06.067261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1afe00 len:0x10000 key:0x182d00 00:19:57.296 [2024-04-26 16:31:06.067275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 
dnr:0 00:19:57.296 [2024-04-26 16:31:06.067291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b19fd80 len:0x10000 key:0x182d00 00:19:57.296 [2024-04-26 16:31:06.067311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.296 [2024-04-26 16:31:06.067327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b18fd00 len:0x10000 key:0x182d00 00:19:57.296 [2024-04-26 16:31:06.067341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.296 [2024-04-26 16:31:06.067382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b17fc80 len:0x10000 key:0x182d00 00:19:57.296 [2024-04-26 16:31:06.067397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.296 [2024-04-26 16:31:06.067413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b16fc00 len:0x10000 key:0x182d00 00:19:57.296 [2024-04-26 16:31:06.067427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.296 [2024-04-26 16:31:06.067444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b15fb80 len:0x10000 key:0x182d00 00:19:57.297 [2024-04-26 16:31:06.067458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.297 [2024-04-26 16:31:06.067474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b14fb00 len:0x10000 key:0x182d00 00:19:57.297 [2024-04-26 16:31:06.067488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.297 [2024-04-26 16:31:06.067504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b13fa80 len:0x10000 key:0x182d00 00:19:57.297 [2024-04-26 16:31:06.067518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.297 [2024-04-26 16:31:06.067534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b12fa00 len:0x10000 key:0x182d00 00:19:57.297 [2024-04-26 16:31:06.067548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.297 [2024-04-26 16:31:06.067564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b11f980 len:0x10000 key:0x182d00 00:19:57.297 [2024-04-26 16:31:06.067578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 
00:19:57.297 [2024-04-26 16:31:06.067595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b10f900 len:0x10000 key:0x182d00 00:19:57.297 [2024-04-26 16:31:06.067608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.297 [2024-04-26 16:31:06.067624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0ff880 len:0x10000 key:0x182d00 00:19:57.297 [2024-04-26 16:31:06.067638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.297 [2024-04-26 16:31:06.067654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0ef800 len:0x10000 key:0x182d00 00:19:57.297 [2024-04-26 16:31:06.067671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.297 [2024-04-26 16:31:06.067687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0df780 len:0x10000 key:0x182d00 00:19:57.297 [2024-04-26 16:31:06.067701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.297 [2024-04-26 16:31:06.067717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0cf700 len:0x10000 key:0x182d00 00:19:57.297 [2024-04-26 16:31:06.067731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.297 [2024-04-26 16:31:06.067747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0bf680 len:0x10000 key:0x182d00 00:19:57.297 [2024-04-26 16:31:06.067761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.297 [2024-04-26 16:31:06.067776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0af600 len:0x10000 key:0x182d00 00:19:57.297 [2024-04-26 16:31:06.067790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.297 [2024-04-26 16:31:06.067806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b09f580 len:0x10000 key:0x182d00 00:19:57.297 [2024-04-26 16:31:06.067820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.297 [2024-04-26 16:31:06.067838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b08f500 len:0x10000 key:0x182d00 00:19:57.297 [2024-04-26 16:31:06.067852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 
00:19:57.297 [2024-04-26 16:31:06.067868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b07f480 len:0x10000 key:0x182d00 00:19:57.297 [2024-04-26 16:31:06.067882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.297 [2024-04-26 16:31:06.067898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b06f400 len:0x10000 key:0x182d00 00:19:57.297 [2024-04-26 16:31:06.067912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.297 [2024-04-26 16:31:06.067928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b05f380 len:0x10000 key:0x182d00 00:19:57.297 [2024-04-26 16:31:06.067942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.297 [2024-04-26 16:31:06.067957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b04f300 len:0x10000 key:0x182d00 00:19:57.297 [2024-04-26 16:31:06.067971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.297 [2024-04-26 16:31:06.067987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b03f280 len:0x10000 key:0x182d00 00:19:57.297 [2024-04-26 16:31:06.068003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.297 [2024-04-26 16:31:06.068020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b02f200 len:0x10000 key:0x182d00 00:19:57.297 [2024-04-26 16:31:06.068034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.297 [2024-04-26 16:31:06.068050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b01f180 len:0x10000 key:0x182d00 00:19:57.297 [2024-04-26 16:31:06.068064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.297 [2024-04-26 16:31:06.068080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b00f100 len:0x10000 key:0x182d00 00:19:57.297 [2024-04-26 16:31:06.068094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.297 [2024-04-26 16:31:06.068110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3f0000 len:0x10000 key:0x182a00 00:19:57.297 [2024-04-26 16:31:06.068123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 
00:19:57.297 [2024-04-26 16:31:06.068140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3dff80 len:0x10000 key:0x182a00 00:19:57.297 [2024-04-26 16:31:06.068153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.297 [2024-04-26 16:31:06.068169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3cff00 len:0x10000 key:0x182a00 00:19:57.297 [2024-04-26 16:31:06.068183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.297 [2024-04-26 16:31:06.068199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3bfe80 len:0x10000 key:0x182a00 00:19:57.297 [2024-04-26 16:31:06.068213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.297 [2024-04-26 16:31:06.068229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3afe00 len:0x10000 key:0x182a00 00:19:57.297 [2024-04-26 16:31:06.068243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.297 [2024-04-26 16:31:06.068259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b39fd80 len:0x10000 key:0x182a00 00:19:57.297 [2024-04-26 16:31:06.068273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.297 [2024-04-26 16:31:06.068289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b38fd00 len:0x10000 key:0x182a00 00:19:57.297 [2024-04-26 16:31:06.068303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.297 [2024-04-26 16:31:06.068319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b37fc80 len:0x10000 key:0x182a00 00:19:57.297 [2024-04-26 16:31:06.068335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.297 [2024-04-26 16:31:06.068358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b36fc00 len:0x10000 key:0x182a00 00:19:57.297 [2024-04-26 16:31:06.068372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.297 [2024-04-26 16:31:06.068389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b35fb80 len:0x10000 key:0x182a00 00:19:57.297 [2024-04-26 16:31:06.068402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 
00:19:57.297 [2024-04-26 16:31:06.068418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b34fb00 len:0x10000 key:0x182a00 00:19:57.297 [2024-04-26 16:31:06.068432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.297 [2024-04-26 16:31:06.068449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b33fa80 len:0x10000 key:0x182a00 00:19:57.297 [2024-04-26 16:31:06.068462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.297 [2024-04-26 16:31:06.068478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b32fa00 len:0x10000 key:0x182a00 00:19:57.297 [2024-04-26 16:31:06.068492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.298 [2024-04-26 16:31:06.068508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b31f980 len:0x10000 key:0x182a00 00:19:57.298 [2024-04-26 16:31:06.068522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.298 [2024-04-26 16:31:06.068538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b30f900 len:0x10000 key:0x182a00 00:19:57.298 [2024-04-26 16:31:06.068555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.298 [2024-04-26 16:31:06.068571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ff880 len:0x10000 key:0x182a00 00:19:57.298 [2024-04-26 16:31:06.068585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.298 [2024-04-26 16:31:06.068601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ef800 len:0x10000 key:0x182a00 00:19:57.298 [2024-04-26 16:31:06.068615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.298 [2024-04-26 16:31:06.068632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2df780 len:0x10000 key:0x182a00 00:19:57.298 [2024-04-26 16:31:06.068646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.298 [2024-04-26 16:31:06.068662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2cf700 len:0x10000 key:0x182a00 00:19:57.298 [2024-04-26 16:31:06.068678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 
00:19:57.298 [2024-04-26 16:31:06.068694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2bf680 len:0x10000 key:0x182a00 00:19:57.298 [2024-04-26 16:31:06.068708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.298 [2024-04-26 16:31:06.068724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2af600 len:0x10000 key:0x182a00 00:19:57.298 [2024-04-26 16:31:06.068738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.298 [2024-04-26 16:31:06.068754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b29f580 len:0x10000 key:0x182a00 00:19:57.298 [2024-04-26 16:31:06.068767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.298 [2024-04-26 16:31:06.068784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b28f500 len:0x10000 key:0x182a00 00:19:57.298 [2024-04-26 16:31:06.068797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.298 [2024-04-26 16:31:06.068813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b27f480 len:0x10000 key:0x182a00 00:19:57.298 [2024-04-26 16:31:06.068827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.298 [2024-04-26 16:31:06.068843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b26f400 len:0x10000 key:0x182a00 00:19:57.298 [2024-04-26 16:31:06.068857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.298 [2024-04-26 16:31:06.068873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b25f380 len:0x10000 key:0x182a00 00:19:57.298 [2024-04-26 16:31:06.068886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.298 [2024-04-26 16:31:06.068903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b24f300 len:0x10000 key:0x182a00 00:19:57.298 [2024-04-26 16:31:06.068916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.298 [2024-04-26 16:31:06.068932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b23f280 len:0x10000 key:0x182a00 00:19:57.298 [2024-04-26 16:31:06.068946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 
00:19:57.298 [2024-04-26 16:31:06.068962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b22f200 len:0x10000 key:0x182a00 00:19:57.298 [2024-04-26 16:31:06.068976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.298 [2024-04-26 16:31:06.068992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b21f180 len:0x10000 key:0x182a00 00:19:57.298 [2024-04-26 16:31:06.069006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.298 [2024-04-26 16:31:06.069025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b20f100 len:0x10000 key:0x182a00 00:19:57.298 [2024-04-26 16:31:06.069038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.298 [2024-04-26 16:31:06.069054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b5f0000 len:0x10000 key:0x182900 00:19:57.298 [2024-04-26 16:31:06.069069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.298 [2024-04-26 16:31:06.069085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae0f700 len:0x10000 key:0x182000 00:19:57.298 [2024-04-26 16:31:06.069099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32508 cdw0:5f646580 sqhd:208c p:0 m:0 dnr:0 00:19:57.298 [2024-04-26 16:31:06.070826] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b8069c0 was disconnected and freed. reset controller. 
00:19:57.298 [2024-04-26 16:31:06.070854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4cfd00 len:0x10000 key:0x182900 00:19:57.298 [2024-04-26 16:31:06.070868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.298 [2024-04-26 16:31:06.070888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4bfc80 len:0x10000 key:0x182900 00:19:57.298 [2024-04-26 16:31:06.070902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.298 [2024-04-26 16:31:06.070919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4afc00 len:0x10000 key:0x182900 00:19:57.298 [2024-04-26 16:31:06.070933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.298 [2024-04-26 16:31:06.070949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b49fb80 len:0x10000 key:0x182900 00:19:57.298 [2024-04-26 16:31:06.070963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.298 [2024-04-26 16:31:06.070979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b48fb00 len:0x10000 key:0x182900 00:19:57.298 [2024-04-26 16:31:06.070992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.298 [2024-04-26 16:31:06.071009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b47fa80 len:0x10000 key:0x182900 00:19:57.298 [2024-04-26 16:31:06.071022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.298 [2024-04-26 16:31:06.071039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b46fa00 len:0x10000 key:0x182900 00:19:57.298 [2024-04-26 16:31:06.071052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.298 [2024-04-26 16:31:06.071068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b45f980 len:0x10000 key:0x182900 00:19:57.298 [2024-04-26 16:31:06.071085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.298 [2024-04-26 16:31:06.071101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b44f900 len:0x10000 key:0x182900 00:19:57.298 [2024-04-26 16:31:06.071115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.298 [2024-04-26 16:31:06.071131] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b43f880 len:0x10000 key:0x182900 00:19:57.298 [2024-04-26 16:31:06.071145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.298 [2024-04-26 16:31:06.071161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b42f800 len:0x10000 key:0x182900 00:19:57.298 [2024-04-26 16:31:06.071174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.298 [2024-04-26 16:31:06.071191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b41f780 len:0x10000 key:0x182900 00:19:57.298 [2024-04-26 16:31:06.071204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.298 [2024-04-26 16:31:06.071221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b40f700 len:0x10000 key:0x182900 00:19:57.298 [2024-04-26 16:31:06.071235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.298 [2024-04-26 16:31:06.071251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7f0000 len:0x10000 key:0x183000 00:19:57.298 [2024-04-26 16:31:06.071267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.298 [2024-04-26 16:31:06.071284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7dff80 len:0x10000 key:0x183000 00:19:57.298 [2024-04-26 16:31:06.071298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.298 [2024-04-26 16:31:06.071314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:26496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7cff00 len:0x10000 key:0x183000 00:19:57.299 [2024-04-26 16:31:06.071328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.299 [2024-04-26 16:31:06.071344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7bfe80 len:0x10000 key:0x183000 00:19:57.299 [2024-04-26 16:31:06.071418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.299 [2024-04-26 16:31:06.071435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7afe00 len:0x10000 key:0x183000 00:19:57.299 [2024-04-26 16:31:06.071448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.299 [2024-04-26 16:31:06.071464] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b79fd80 len:0x10000 key:0x183000 00:19:57.299 [2024-04-26 16:31:06.071478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.299 [2024-04-26 16:31:06.071497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b78fd00 len:0x10000 key:0x183000 00:19:57.299 [2024-04-26 16:31:06.071510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.299 [2024-04-26 16:31:06.071526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b77fc80 len:0x10000 key:0x183000 00:19:57.299 [2024-04-26 16:31:06.071541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.299 [2024-04-26 16:31:06.071557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b76fc00 len:0x10000 key:0x183000 00:19:57.299 [2024-04-26 16:31:06.071571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.299 [2024-04-26 16:31:06.071587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:27392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b75fb80 len:0x10000 key:0x183000 00:19:57.299 [2024-04-26 16:31:06.071602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.299 [2024-04-26 16:31:06.071618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b74fb00 len:0x10000 key:0x183000 00:19:57.299 [2024-04-26 16:31:06.071631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.299 [2024-04-26 16:31:06.071648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b73fa80 len:0x10000 key:0x183000 00:19:57.299 [2024-04-26 16:31:06.071662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.299 [2024-04-26 16:31:06.071677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b72fa00 len:0x10000 key:0x183000 00:19:57.299 [2024-04-26 16:31:06.071691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.299 [2024-04-26 16:31:06.071707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b71f980 len:0x10000 key:0x183000 00:19:57.299 [2024-04-26 16:31:06.071721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.299 [2024-04-26 16:31:06.071737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 
lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b70f900 len:0x10000 key:0x183000 00:19:57.299 [2024-04-26 16:31:06.071750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.299 [2024-04-26 16:31:06.071767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ff880 len:0x10000 key:0x183000 00:19:57.299 [2024-04-26 16:31:06.071781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.299 [2024-04-26 16:31:06.071797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ef800 len:0x10000 key:0x183000 00:19:57.299 [2024-04-26 16:31:06.071810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.299 [2024-04-26 16:31:06.071829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6df780 len:0x10000 key:0x183000 00:19:57.299 [2024-04-26 16:31:06.071843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.299 [2024-04-26 16:31:06.071859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6cf700 len:0x10000 key:0x183000 00:19:57.299 [2024-04-26 16:31:06.071872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.299 [2024-04-26 16:31:06.071888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6bf680 len:0x10000 key:0x183000 00:19:57.299 [2024-04-26 16:31:06.071902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.299 [2024-04-26 16:31:06.071918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6af600 len:0x10000 key:0x183000 00:19:57.299 [2024-04-26 16:31:06.071932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.299 [2024-04-26 16:31:06.071948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b69f580 len:0x10000 key:0x183000 00:19:57.299 [2024-04-26 16:31:06.071962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.299 [2024-04-26 16:31:06.071978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b68f500 len:0x10000 key:0x183000 00:19:57.299 [2024-04-26 16:31:06.071992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.299 [2024-04-26 16:31:06.072008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK 
ADDRESS 0x20001b67f480 len:0x10000 key:0x183000 00:19:57.299 [2024-04-26 16:31:06.072021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.299 [2024-04-26 16:31:06.072038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b66f400 len:0x10000 key:0x183000 00:19:57.299 [2024-04-26 16:31:06.072051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.299 [2024-04-26 16:31:06.072068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b65f380 len:0x10000 key:0x183000 00:19:57.299 [2024-04-26 16:31:06.072081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.299 [2024-04-26 16:31:06.072098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b64f300 len:0x10000 key:0x183000 00:19:57.299 [2024-04-26 16:31:06.072112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.299 [2024-04-26 16:31:06.072128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b63f280 len:0x10000 key:0x183000 00:19:57.299 [2024-04-26 16:31:06.072141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.299 [2024-04-26 16:31:06.072158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b62f200 len:0x10000 key:0x183000 00:19:57.299 [2024-04-26 16:31:06.072173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.299 [2024-04-26 16:31:06.072189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b61f180 len:0x10000 key:0x183000 00:19:57.299 [2024-04-26 16:31:06.072203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.299 [2024-04-26 16:31:06.072219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b60f100 len:0x10000 key:0x183000 00:19:57.299 [2024-04-26 16:31:06.072233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.299 [2024-04-26 16:31:06.072249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9f0000 len:0x10000 key:0x181300 00:19:57.299 [2024-04-26 16:31:06.072263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.299 [2024-04-26 16:31:06.072279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9dff80 len:0x10000 
key:0x181300 00:19:57.299 [2024-04-26 16:31:06.072296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.299 [2024-04-26 16:31:06.072312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9cff00 len:0x10000 key:0x181300 00:19:57.299 [2024-04-26 16:31:06.072326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.299 [2024-04-26 16:31:06.072342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9bfe80 len:0x10000 key:0x181300 00:19:57.299 [2024-04-26 16:31:06.072360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.299 [2024-04-26 16:31:06.072377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9afe00 len:0x10000 key:0x181300 00:19:57.299 [2024-04-26 16:31:06.072391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.299 [2024-04-26 16:31:06.072407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b99fd80 len:0x10000 key:0x181300 00:19:57.299 [2024-04-26 16:31:06.072421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.299 [2024-04-26 16:31:06.072439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b98fd00 len:0x10000 key:0x181300 00:19:57.299 [2024-04-26 16:31:06.072453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.299 [2024-04-26 16:31:06.072469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b97fc80 len:0x10000 key:0x181300 00:19:57.299 [2024-04-26 16:31:06.072482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.300 [2024-04-26 16:31:06.072499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b96fc00 len:0x10000 key:0x181300 00:19:57.300 [2024-04-26 16:31:06.072518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.300 [2024-04-26 16:31:06.072534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b95fb80 len:0x10000 key:0x181300 00:19:57.300 [2024-04-26 16:31:06.072548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.300 [2024-04-26 16:31:06.072564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b94fb00 len:0x10000 key:0x181300 00:19:57.300 [2024-04-26 
16:31:06.072578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.300 [2024-04-26 16:31:06.072594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b93fa80 len:0x10000 key:0x181300 00:19:57.300 [2024-04-26 16:31:06.072607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.300 [2024-04-26 16:31:06.072624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b92fa00 len:0x10000 key:0x181300 00:19:57.300 [2024-04-26 16:31:06.072637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.300 [2024-04-26 16:31:06.072653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b91f980 len:0x10000 key:0x181300 00:19:57.300 [2024-04-26 16:31:06.072667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.300 [2024-04-26 16:31:06.072683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b90f900 len:0x10000 key:0x181300 00:19:57.300 [2024-04-26 16:31:06.072697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.300 [2024-04-26 16:31:06.072713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8ff880 len:0x10000 key:0x181300 00:19:57.300 [2024-04-26 16:31:06.072727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.300 [2024-04-26 16:31:06.072743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8ef800 len:0x10000 key:0x181300 00:19:57.300 [2024-04-26 16:31:06.072756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.300 [2024-04-26 16:31:06.072772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8df780 len:0x10000 key:0x181300 00:19:57.300 [2024-04-26 16:31:06.072786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.300 [2024-04-26 16:31:06.072802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8cf700 len:0x10000 key:0x181300 00:19:57.300 [2024-04-26 16:31:06.072816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.300 [2024-04-26 16:31:06.072832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4dfd80 len:0x10000 key:0x182900 00:19:57.300 [2024-04-26 16:31:06.072846] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:417f440 sqhd:e130 p:0 m:0 dnr:0 00:19:57.300 [2024-04-26 16:31:06.074529] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806780 was disconnected and freed. reset controller. 00:19:57.300 [2024-04-26 16:31:06.074624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.300 [2024-04-26 16:31:06.074643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.300 [2024-04-26 16:31:06.074659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.300 [2024-04-26 16:31:06.074674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.300 [2024-04-26 16:31:06.074689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.300 [2024-04-26 16:31:06.074703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.300 [2024-04-26 16:31:06.074717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.300 [2024-04-26 16:31:06.074731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:57.300 [2024-04-26 16:31:06.076437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:57.300 [2024-04-26 16:31:06.076456] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:19:57.300 [2024-04-26 16:31:06.076470] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
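The two dumps above are the expected shutdown behaviour: once the target side goes away, each queue pair is disconnected and freed, and every WRITE still in flight on it is completed with ABORTED - SQ DELETION, printed as one nvme_io_qpair_print_command / spdk_nvme_print_completion pair per command. When reading a dump like this it is usually enough to know how many writes were cut off and over what LBA span; a rough helper for that is sketched below (it assumes the console output has been saved to a file, here called shutdown.log, and that the "lba:<n> len:<n>" wording matches the log exactly):

  # Collapse an aborted-write dump into a count and an overall LBA range.
  grep -o 'lba:[0-9]* len:[0-9]*' shutdown.log \
      | awk -F'[: ]' '{ n++; if (n == 1 || $2 < min) min = $2; if ($2 > max) max = $2 }
                      END { printf "%d aborted writes, lba %d..%d\n", n, min, max }'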
00:19:57.300 [2024-04-26 16:31:06.076491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.300 [2024-04-26 16:31:06.076506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:59758 cdw0:0 sqhd:b700 p:0 m:0 dnr:0 00:19:57.300 [2024-04-26 16:31:06.076520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.300 [2024-04-26 16:31:06.076534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:59758 cdw0:0 sqhd:b700 p:0 m:0 dnr:0 00:19:57.300 [2024-04-26 16:31:06.076549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.300 [2024-04-26 16:31:06.076562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:59758 cdw0:0 sqhd:b700 p:0 m:0 dnr:0 00:19:57.300 [2024-04-26 16:31:06.076576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.300 [2024-04-26 16:31:06.076590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:59758 cdw0:0 sqhd:b700 p:0 m:0 dnr:0 00:19:57.300 [2024-04-26 16:31:06.077923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:57.300 [2024-04-26 16:31:06.077939] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:19:57.300 [2024-04-26 16:31:06.077953] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:19:57.300 [2024-04-26 16:31:06.077973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.300 [2024-04-26 16:31:06.077987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:59758 cdw0:0 sqhd:b700 p:0 m:0 dnr:0 00:19:57.300 [2024-04-26 16:31:06.078001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.300 [2024-04-26 16:31:06.078019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:59758 cdw0:0 sqhd:b700 p:0 m:0 dnr:0 00:19:57.300 [2024-04-26 16:31:06.078034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.300 [2024-04-26 16:31:06.078047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:59758 cdw0:0 sqhd:b700 p:0 m:0 dnr:0 00:19:57.300 [2024-04-26 16:31:06.078062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.300 [2024-04-26 16:31:06.078075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:59758 cdw0:0 sqhd:b700 p:0 m:0 dnr:0 00:19:57.300 [2024-04-26 16:31:06.079318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:57.300 [2024-04-26 16:31:06.079336] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:19:57.300 [2024-04-26 16:31:06.079353] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:19:57.300 [2024-04-26 16:31:06.079373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.300 [2024-04-26 16:31:06.079409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:59758 cdw0:0 sqhd:b700 p:0 m:0 dnr:0 00:19:57.300 [2024-04-26 16:31:06.079424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.300 [2024-04-26 16:31:06.079438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:59758 cdw0:0 sqhd:b700 p:0 m:0 dnr:0 00:19:57.300 [2024-04-26 16:31:06.079452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.300 [2024-04-26 16:31:06.079466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:59758 cdw0:0 sqhd:b700 p:0 m:0 dnr:0 00:19:57.300 [2024-04-26 16:31:06.079481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.300 [2024-04-26 16:31:06.079494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:59758 cdw0:0 sqhd:b700 p:0 m:0 dnr:0 00:19:57.300 [2024-04-26 16:31:06.080884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:57.300 [2024-04-26 16:31:06.080901] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:19:57.300 [2024-04-26 16:31:06.080914] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:19:57.300 [2024-04-26 16:31:06.080933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.300 [2024-04-26 16:31:06.080948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:59758 cdw0:0 sqhd:b700 p:0 m:0 dnr:0 00:19:57.300 [2024-04-26 16:31:06.080963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.300 [2024-04-26 16:31:06.080976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:59758 cdw0:0 sqhd:b700 p:0 m:0 dnr:0 00:19:57.300 [2024-04-26 16:31:06.080991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.300 [2024-04-26 16:31:06.081005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:59758 cdw0:0 sqhd:b700 p:0 m:0 dnr:0 00:19:57.300 [2024-04-26 16:31:06.081019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.300 [2024-04-26 16:31:06.081036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:59758 cdw0:0 sqhd:b700 p:0 m:0 dnr:0 00:19:57.300 [2024-04-26 16:31:06.082271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:57.301 [2024-04-26 16:31:06.082289] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:57.301 [2024-04-26 16:31:06.082302] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:19:57.301 [2024-04-26 16:31:06.082322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.301 [2024-04-26 16:31:06.082336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:59758 cdw0:0 sqhd:b700 p:0 m:0 dnr:0 00:19:57.301 [2024-04-26 16:31:06.082356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.301 [2024-04-26 16:31:06.082369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:59758 cdw0:0 sqhd:b700 p:0 m:0 dnr:0 00:19:57.301 [2024-04-26 16:31:06.082383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.301 [2024-04-26 16:31:06.082397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:59758 cdw0:0 sqhd:b700 p:0 m:0 dnr:0 00:19:57.301 [2024-04-26 16:31:06.082411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.301 [2024-04-26 16:31:06.082425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:59758 cdw0:0 sqhd:b700 p:0 m:0 dnr:0 00:19:57.301 [2024-04-26 16:31:06.083676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:57.301 [2024-04-26 16:31:06.083694] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:19:57.301 [2024-04-26 16:31:06.083707] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:19:57.301 [2024-04-26 16:31:06.083726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.301 [2024-04-26 16:31:06.083740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:59758 cdw0:0 sqhd:b700 p:0 m:0 dnr:0 00:19:57.301 [2024-04-26 16:31:06.083755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.301 [2024-04-26 16:31:06.083768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:59758 cdw0:0 sqhd:b700 p:0 m:0 dnr:0 00:19:57.301 [2024-04-26 16:31:06.083782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.301 [2024-04-26 16:31:06.083796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:59758 cdw0:0 sqhd:b700 p:0 m:0 dnr:0 00:19:57.301 [2024-04-26 16:31:06.083810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.301 [2024-04-26 16:31:06.083823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:59758 cdw0:0 sqhd:b700 p:0 m:0 dnr:0 00:19:57.301 [2024-04-26 16:31:06.085218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:57.301 [2024-04-26 16:31:06.085258] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:19:57.301 [2024-04-26 16:31:06.085288] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:19:57.301 [2024-04-26 16:31:06.085332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.301 [2024-04-26 16:31:06.085383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:59758 cdw0:0 sqhd:b700 p:0 m:0 dnr:0 00:19:57.301 [2024-04-26 16:31:06.085417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.301 [2024-04-26 16:31:06.085449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:59758 cdw0:0 sqhd:b700 p:0 m:0 dnr:0 00:19:57.301 [2024-04-26 16:31:06.085481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.301 [2024-04-26 16:31:06.085512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:59758 cdw0:0 sqhd:b700 p:0 m:0 dnr:0 00:19:57.301 [2024-04-26 16:31:06.085544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.301 [2024-04-26 16:31:06.085575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:59758 cdw0:0 sqhd:b700 p:0 m:0 dnr:0 00:19:57.301 [2024-04-26 16:31:06.086958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:57.301 [2024-04-26 16:31:06.086998] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:19:57.301 [2024-04-26 16:31:06.087028] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:19:57.301 [2024-04-26 16:31:06.087071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.301 [2024-04-26 16:31:06.087103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:59758 cdw0:0 sqhd:b700 p:0 m:0 dnr:0 00:19:57.301 [2024-04-26 16:31:06.087136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.301 [2024-04-26 16:31:06.087167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:59758 cdw0:0 sqhd:b700 p:0 m:0 dnr:0 00:19:57.301 [2024-04-26 16:31:06.087210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.301 [2024-04-26 16:31:06.087223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:59758 cdw0:0 sqhd:b700 p:0 m:0 dnr:0 00:19:57.301 [2024-04-26 16:31:06.087238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.301 [2024-04-26 16:31:06.087251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:59758 cdw0:0 sqhd:b700 p:0 m:0 dnr:0 00:19:57.301 [2024-04-26 16:31:06.088465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:57.301 [2024-04-26 16:31:06.088504] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:19:57.301 [2024-04-26 16:31:06.088535] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:19:57.301 [2024-04-26 16:31:06.088577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.301 [2024-04-26 16:31:06.088610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:59758 cdw0:0 sqhd:b700 p:0 m:0 dnr:0 00:19:57.301 [2024-04-26 16:31:06.088643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.301 [2024-04-26 16:31:06.088674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:59758 cdw0:0 sqhd:b700 p:0 m:0 dnr:0 00:19:57.301 [2024-04-26 16:31:06.088707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.301 [2024-04-26 16:31:06.088744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:59758 cdw0:0 sqhd:b700 p:0 m:0 dnr:0 00:19:57.301 [2024-04-26 16:31:06.088777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:57.301 [2024-04-26 16:31:06.088809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:59758 cdw0:0 sqhd:b700 p:0 m:0 dnr:0 00:19:57.301 [2024-04-26 16:31:06.107198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:57.301 [2024-04-26 16:31:06.107250] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:19:57.301 [2024-04-26 16:31:06.107282] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:57.301 [2024-04-26 16:31:06.110699] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:57.301 [2024-04-26 16:31:06.110726] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:19:57.301 [2024-04-26 16:31:06.110742] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:19:57.301 [2024-04-26 16:31:06.110758] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:19:57.301 [2024-04-26 16:31:06.110773] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:19:57.301 [2024-04-26 16:31:06.110788] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:19:57.301 [2024-04-26 16:31:06.110870] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:57.301 [2024-04-26 16:31:06.110899] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:57.301 [2024-04-26 16:31:06.110917] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:57.301 [2024-04-26 16:31:06.110935] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
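The entries above repeat one pattern per subsystem NQN: pending admin ASYNC EVENT REQUESTs complete as ABORTED - SQ DELETION (00/08), the controller is marked as failed, and a reset is scheduled. A minimal triage sketch for a saved copy of this console output, assuming only grep/uniq and a placeholder log path (this helper is not part of the test suite):

#!/usr/bin/env bash
# Summarize the shutdown/failover storm above from a saved console log.
# LOG is a placeholder path; point it at a copy of this output.
LOG=${1:-nvmf-phy-autotest.log}

# Count aborted admin completions with grep -o so that multiple hits on one
# physical line are all counted.
printf 'aborted admin completions (00/08): '
grep -o 'ABORTED - SQ DELETION (00/08)' "$LOG" | wc -l

# Group the failed-state and reset notices per subsystem NQN.
echo 'failed-state notices per subsystem:'
grep -o 'nqn\.2016-06\.io\.spdk:cnode[0-9]*] in failed state' "$LOG" | sort | uniq -c

echo 'controller resets per subsystem:'
grep -o 'nqn\.2016-06\.io\.spdk:cnode[0-9]*] resetting controller' "$LOG" | sort | uniq -c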
00:19:57.301 [2024-04-26 16:31:06.111081] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:19:57.302 [2024-04-26 16:31:06.111099] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:19:57.302 [2024-04-26 16:31:06.111115] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:19:57.302 [2024-04-26 16:31:06.111134] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:19:57.302 [2024-04-26 16:31:06.121257] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:19:57.302 [2024-04-26 16:31:06.121318] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:19:57.302 [2024-04-26 16:31:06.121363] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:19:57.302 [2024-04-26 16:31:06.121479] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:19:57.302 [2024-04-26 16:31:06.121514] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:19:57.302 [2024-04-26 16:31:06.121540] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e5380 00:19:57.302 [2024-04-26 16:31:06.121644] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:19:57.302 [2024-04-26 16:31:06.121677] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:19:57.302 [2024-04-26 16:31:06.121702] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ba540 00:19:57.302 [2024-04-26 16:31:06.121832] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:19:57.302 [2024-04-26 16:31:06.121865] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:19:57.302 [2024-04-26 16:31:06.121890] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d9c00 00:19:57.302 [2024-04-26 16:31:06.122002] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:19:57.302 [2024-04-26 16:31:06.122036] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:19:57.302 [2024-04-26 16:31:06.122061] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c60c0 00:19:57.302 [2024-04-26 16:31:06.122221] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:19:57.302 [2024-04-26 16:31:06.122254] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:19:57.302 [2024-04-26 16:31:06.122279] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192bd440 00:19:57.302 [2024-04-26 16:31:06.122470] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but 
received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:19:57.302 [2024-04-26 16:31:06.122486] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:19:57.302 [2024-04-26 16:31:06.122496] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192b54c0 00:19:57.302 [2024-04-26 16:31:06.122601] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:19:57.302 [2024-04-26 16:31:06.122615] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:19:57.302 [2024-04-26 16:31:06.122625] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001929c140 00:19:57.302 [2024-04-26 16:31:06.122714] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:19:57.302 [2024-04-26 16:31:06.122728] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:19:57.302 [2024-04-26 16:31:06.122738] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200019298cc0 00:19:57.302 [2024-04-26 16:31:06.122845] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:19:57.302 [2024-04-26 16:31:06.122859] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:19:57.302 [2024-04-26 16:31:06.122869] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928f280 00:19:57.302 task offset: 40960 on job bdev=Nvme8n1 fails 00:19:57.302 00:19:57.302 Latency(us) 00:19:57.302 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:57.302 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:57.302 Job: Nvme1n1 ended in about 1.88 seconds with error 00:19:57.302 Verification LBA range: start 0x0 length 0x400 00:19:57.302 Nvme1n1 : 1.88 136.31 8.52 34.08 0.00 372008.34 39891.48 1050399.61 00:19:57.302 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:57.302 Job: Nvme2n1 ended in about 1.88 seconds with error 00:19:57.302 Verification LBA range: start 0x0 length 0x400 00:19:57.302 Nvme2n1 : 1.88 137.31 8.58 34.06 0.00 366620.91 5527.82 1050399.61 00:19:57.302 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:57.302 Job: Nvme3n1 ended in about 1.88 seconds with error 00:19:57.302 Verification LBA range: start 0x0 length 0x400 00:19:57.302 Nvme3n1 : 1.88 136.19 8.51 34.05 0.00 365870.04 48781.58 1050399.61 00:19:57.302 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:57.302 Job: Nvme4n1 ended in about 1.88 seconds with error 00:19:57.302 Verification LBA range: start 0x0 length 0x400 00:19:57.302 Nvme4n1 : 1.88 153.14 9.57 34.03 0.00 329813.36 6667.58 1050399.61 00:19:57.302 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:57.302 Job: Nvme5n1 ended in about 1.88 seconds with error 00:19:57.302 Verification LBA range: start 0x0 length 0x400 00:19:57.302 Nvme5n1 : 1.88 142.97 8.94 34.02 0.00 345700.73 9459.98 1050399.61 00:19:57.302 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:57.302 Job: Nvme6n1 ended in about 1.88 seconds with error 
00:19:57.302 Verification LBA range: start 0x0 length 0x400 00:19:57.302 Nvme6n1 : 1.88 150.88 9.43 34.00 0.00 327971.00 14246.96 1050399.61 00:19:57.302 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:57.302 Job: Nvme7n1 ended in about 1.88 seconds with error 00:19:57.302 Verification LBA range: start 0x0 length 0x400 00:19:57.302 Nvme7n1 : 1.88 152.93 9.56 33.99 0.00 321470.59 21997.30 1050399.61 00:19:57.302 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:57.302 Job: Nvme8n1 ended in about 1.88 seconds with error 00:19:57.302 Verification LBA range: start 0x0 length 0x400 00:19:57.302 Nvme8n1 : 1.88 148.62 9.29 33.97 0.00 325979.31 32141.13 1043105.17 00:19:57.302 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:57.302 Job: Nvme9n1 ended in about 1.86 seconds with error 00:19:57.302 Verification LBA range: start 0x0 length 0x400 00:19:57.302 Nvme9n1 : 1.86 137.88 8.62 34.47 0.00 344900.30 64282.27 1094166.26 00:19:57.302 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:57.302 Job: Nvme10n1 ended in about 1.86 seconds with error 00:19:57.302 Verification LBA range: start 0x0 length 0x400 00:19:57.302 Nvme10n1 : 1.86 103.34 6.46 34.45 0.00 427433.63 64738.17 1079577.38 00:19:57.302 =================================================================================================================== 00:19:57.302 Total : 1399.56 87.47 341.10 0.00 350353.60 5527.82 1094166.26 00:19:57.302 [2024-04-26 16:31:06.158694] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:57.561 16:31:06 -- target/shutdown.sh@142 -- # kill -9 512838 00:19:57.561 16:31:06 -- target/shutdown.sh@144 -- # stoptarget 00:19:57.561 16:31:06 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:57.561 16:31:06 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:57.561 16:31:06 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:57.561 16:31:06 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:57.561 16:31:06 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:57.561 16:31:06 -- nvmf/common.sh@117 -- # sync 00:19:57.561 16:31:06 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:19:57.561 16:31:06 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:19:57.561 16:31:06 -- nvmf/common.sh@120 -- # set +e 00:19:57.561 16:31:06 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:57.561 16:31:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:19:57.561 rmmod nvme_rdma 00:19:57.561 rmmod nvme_fabrics 00:19:57.821 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 121: 512838 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") -q 64 -o 65536 -w verify -t 10 00:19:57.821 16:31:06 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:57.821 16:31:06 -- nvmf/common.sh@124 -- # set -e 00:19:57.821 16:31:06 -- nvmf/common.sh@125 -- # return 0 00:19:57.821 16:31:06 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:19:57.821 16:31:06 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:57.821 16:31:06 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:19:57.821 00:19:57.821 real 0m5.338s 00:19:57.821 user 0m17.935s 00:19:57.821 sys 0m1.426s 00:19:57.821 16:31:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:57.821 16:31:06 -- common/autotest_common.sh@10 -- 
# set +x 00:19:57.821 ************************************ 00:19:57.821 END TEST nvmf_shutdown_tc3 00:19:57.821 ************************************ 00:19:57.821 16:31:06 -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:19:57.821 00:19:57.821 real 0m24.951s 00:19:57.821 user 1m12.721s 00:19:57.821 sys 0m8.983s 00:19:57.821 16:31:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:57.821 16:31:06 -- common/autotest_common.sh@10 -- # set +x 00:19:57.821 ************************************ 00:19:57.821 END TEST nvmf_shutdown 00:19:57.821 ************************************ 00:19:57.821 16:31:06 -- nvmf/nvmf.sh@84 -- # timing_exit target 00:19:57.821 16:31:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:57.821 16:31:06 -- common/autotest_common.sh@10 -- # set +x 00:19:57.821 16:31:06 -- nvmf/nvmf.sh@86 -- # timing_enter host 00:19:57.821 16:31:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:57.821 16:31:06 -- common/autotest_common.sh@10 -- # set +x 00:19:57.821 16:31:06 -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:19:57.821 16:31:06 -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:19:57.821 16:31:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:57.821 16:31:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:57.821 16:31:06 -- common/autotest_common.sh@10 -- # set +x 00:19:58.081 ************************************ 00:19:58.081 START TEST nvmf_multicontroller 00:19:58.081 ************************************ 00:19:58.081 16:31:06 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:19:58.081 * Looking for test storage... 
00:19:58.081 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:58.081 16:31:07 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:58.081 16:31:07 -- nvmf/common.sh@7 -- # uname -s 00:19:58.081 16:31:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:58.081 16:31:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:58.081 16:31:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:58.081 16:31:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:58.081 16:31:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:58.081 16:31:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:58.081 16:31:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:58.081 16:31:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:58.081 16:31:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:58.081 16:31:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:58.081 16:31:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:19:58.081 16:31:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:19:58.081 16:31:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:58.081 16:31:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:58.081 16:31:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:58.081 16:31:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:58.081 16:31:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:58.081 16:31:07 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:58.081 16:31:07 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:58.081 16:31:07 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:58.081 16:31:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.081 16:31:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.081 16:31:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.081 16:31:07 -- paths/export.sh@5 -- # export PATH 00:19:58.081 16:31:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.081 16:31:07 -- nvmf/common.sh@47 -- # : 0 00:19:58.081 16:31:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:58.081 16:31:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:58.081 16:31:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:58.081 16:31:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:58.081 16:31:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:58.081 16:31:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:58.081 16:31:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:58.081 16:31:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:58.081 16:31:07 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:58.081 16:31:07 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:58.081 16:31:07 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:19:58.081 16:31:07 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:19:58.081 16:31:07 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:58.081 16:31:07 -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:19:58.081 16:31:07 -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:19:58.081 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 
00:19:58.081 16:31:07 -- host/multicontroller.sh@20 -- # exit 0 00:19:58.081 00:19:58.081 real 0m0.141s 00:19:58.081 user 0m0.052s 00:19:58.081 sys 0m0.100s 00:19:58.081 16:31:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:58.081 16:31:07 -- common/autotest_common.sh@10 -- # set +x 00:19:58.081 ************************************ 00:19:58.081 END TEST nvmf_multicontroller 00:19:58.081 ************************************ 00:19:58.082 16:31:07 -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:19:58.082 16:31:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:58.082 16:31:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:58.082 16:31:07 -- common/autotest_common.sh@10 -- # set +x 00:19:58.341 ************************************ 00:19:58.341 START TEST nvmf_aer 00:19:58.341 ************************************ 00:19:58.341 16:31:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:19:58.600 * Looking for test storage... 00:19:58.601 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:58.601 16:31:07 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:58.601 16:31:07 -- nvmf/common.sh@7 -- # uname -s 00:19:58.601 16:31:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:58.601 16:31:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:58.601 16:31:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:58.601 16:31:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:58.601 16:31:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:58.601 16:31:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:58.601 16:31:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:58.601 16:31:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:58.601 16:31:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:58.601 16:31:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:58.601 16:31:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:19:58.601 16:31:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:19:58.601 16:31:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:58.601 16:31:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:58.601 16:31:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:58.601 16:31:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:58.601 16:31:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:58.601 16:31:07 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:58.601 16:31:07 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:58.601 16:31:07 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:58.601 16:31:07 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.601 16:31:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.601 16:31:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.601 16:31:07 -- paths/export.sh@5 -- # export PATH 00:19:58.601 16:31:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.601 16:31:07 -- nvmf/common.sh@47 -- # : 0 00:19:58.601 16:31:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:58.601 16:31:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:58.601 16:31:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:58.601 16:31:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:58.601 16:31:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:58.601 16:31:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:58.601 16:31:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:58.601 16:31:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:58.601 16:31:07 -- host/aer.sh@11 -- # nvmftestinit 00:19:58.601 16:31:07 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:19:58.601 16:31:07 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:58.601 16:31:07 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:58.601 16:31:07 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:58.601 16:31:07 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:58.601 16:31:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:58.601 16:31:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:58.601 16:31:07 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:58.601 16:31:07 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:58.601 16:31:07 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:58.601 16:31:07 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:58.601 16:31:07 -- common/autotest_common.sh@10 -- # set +x 00:20:05.169 16:31:13 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:05.169 16:31:13 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:05.169 16:31:13 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:05.169 16:31:13 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:05.169 16:31:13 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:05.169 16:31:13 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:05.169 16:31:13 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:05.169 16:31:13 -- nvmf/common.sh@295 -- # net_devs=() 00:20:05.169 16:31:13 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:05.169 16:31:13 -- nvmf/common.sh@296 -- # e810=() 00:20:05.169 16:31:13 -- nvmf/common.sh@296 -- # local -ga e810 00:20:05.169 16:31:13 -- nvmf/common.sh@297 -- # x722=() 00:20:05.169 16:31:13 -- nvmf/common.sh@297 -- # local -ga x722 00:20:05.169 16:31:13 -- nvmf/common.sh@298 -- # mlx=() 00:20:05.169 16:31:13 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:05.169 16:31:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:05.169 16:31:13 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:05.169 16:31:13 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:05.169 16:31:13 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:05.169 16:31:13 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:05.169 16:31:13 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:05.169 16:31:13 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:05.169 16:31:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:05.169 16:31:13 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:05.169 16:31:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:05.169 16:31:13 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:05.169 16:31:13 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:05.169 16:31:13 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:20:05.169 16:31:13 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:20:05.169 16:31:13 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:20:05.169 16:31:13 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:20:05.169 16:31:13 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:20:05.169 16:31:13 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:05.169 16:31:13 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:05.169 16:31:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:20:05.169 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:20:05.169 16:31:13 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:05.169 16:31:13 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:05.169 16:31:13 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:20:05.169 16:31:13 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:20:05.169 16:31:13 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:05.169 16:31:13 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:05.169 16:31:13 -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:20:05.169 16:31:13 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:20:05.169 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:20:05.169 16:31:13 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:05.169 16:31:13 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:05.169 16:31:13 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:20:05.169 16:31:13 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:20:05.169 16:31:13 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:05.169 16:31:13 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:05.169 16:31:13 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:05.169 16:31:13 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:20:05.169 16:31:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:05.169 16:31:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:05.169 16:31:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:05.169 16:31:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:05.169 16:31:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:20:05.169 Found net devices under 0000:18:00.0: mlx_0_0 00:20:05.169 16:31:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:05.169 16:31:13 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:05.169 16:31:13 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:05.169 16:31:13 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:05.169 16:31:13 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:05.169 16:31:13 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:20:05.169 Found net devices under 0000:18:00.1: mlx_0_1 00:20:05.169 16:31:13 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:05.169 16:31:13 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:05.169 16:31:13 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:05.169 16:31:13 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:05.169 16:31:13 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:20:05.169 16:31:13 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:20:05.169 16:31:13 -- nvmf/common.sh@409 -- # rdma_device_init 00:20:05.169 16:31:13 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:20:05.169 16:31:13 -- nvmf/common.sh@58 -- # uname 00:20:05.169 16:31:13 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:20:05.169 16:31:13 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:20:05.169 16:31:13 -- nvmf/common.sh@63 -- # modprobe ib_core 00:20:05.169 16:31:13 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:20:05.169 16:31:13 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:20:05.169 16:31:13 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:20:05.169 16:31:13 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:20:05.169 16:31:13 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:20:05.169 16:31:13 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:20:05.169 16:31:13 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:05.169 16:31:13 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:20:05.169 16:31:13 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:05.169 16:31:13 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:05.169 16:31:13 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:05.169 16:31:13 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:05.169 16:31:13 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 
00:20:05.169 16:31:13 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:05.169 16:31:13 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:05.169 16:31:13 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:05.169 16:31:13 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:05.169 16:31:13 -- nvmf/common.sh@105 -- # continue 2 00:20:05.169 16:31:13 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:05.169 16:31:13 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:05.169 16:31:13 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:05.169 16:31:13 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:05.169 16:31:13 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:05.169 16:31:13 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:05.169 16:31:13 -- nvmf/common.sh@105 -- # continue 2 00:20:05.170 16:31:13 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:05.170 16:31:13 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:20:05.170 16:31:13 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:05.170 16:31:13 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:05.170 16:31:13 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:05.170 16:31:13 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:05.170 16:31:13 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:20:05.170 16:31:13 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:20:05.170 16:31:13 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:20:05.170 15: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:05.170 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:20:05.170 altname enp24s0f0np0 00:20:05.170 altname ens785f0np0 00:20:05.170 inet 192.168.100.8/24 scope global mlx_0_0 00:20:05.170 valid_lft forever preferred_lft forever 00:20:05.170 16:31:13 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:05.170 16:31:13 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:20:05.170 16:31:13 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:05.170 16:31:13 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:05.170 16:31:13 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:05.170 16:31:13 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:05.170 16:31:13 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:20:05.170 16:31:13 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:20:05.170 16:31:13 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:20:05.170 16: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:05.170 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:20:05.170 altname enp24s0f1np1 00:20:05.170 altname ens785f1np1 00:20:05.170 inet 192.168.100.9/24 scope global mlx_0_1 00:20:05.170 valid_lft forever preferred_lft forever 00:20:05.170 16:31:13 -- nvmf/common.sh@411 -- # return 0 00:20:05.170 16:31:13 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:05.170 16:31:13 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:05.170 16:31:13 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:20:05.170 16:31:13 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:20:05.170 16:31:13 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:20:05.170 16:31:13 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:05.170 16:31:13 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:05.170 16:31:13 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:05.170 16:31:13 -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:05.170 16:31:13 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:05.170 16:31:13 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:05.170 16:31:13 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:05.170 16:31:13 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:05.170 16:31:13 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:05.170 16:31:13 -- nvmf/common.sh@105 -- # continue 2 00:20:05.170 16:31:13 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:05.170 16:31:13 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:05.170 16:31:13 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:05.170 16:31:13 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:05.170 16:31:13 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:05.170 16:31:13 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:05.170 16:31:13 -- nvmf/common.sh@105 -- # continue 2 00:20:05.170 16:31:13 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:05.170 16:31:13 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:20:05.170 16:31:13 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:05.170 16:31:13 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:05.170 16:31:13 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:05.170 16:31:13 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:05.170 16:31:13 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:05.170 16:31:13 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:20:05.170 16:31:13 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:05.170 16:31:13 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:05.170 16:31:13 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:05.170 16:31:13 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:05.170 16:31:13 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:20:05.170 192.168.100.9' 00:20:05.170 16:31:13 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:20:05.170 192.168.100.9' 00:20:05.170 16:31:13 -- nvmf/common.sh@446 -- # head -n 1 00:20:05.170 16:31:13 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:05.170 16:31:13 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:20:05.170 192.168.100.9' 00:20:05.170 16:31:13 -- nvmf/common.sh@447 -- # tail -n +2 00:20:05.170 16:31:13 -- nvmf/common.sh@447 -- # head -n 1 00:20:05.170 16:31:13 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:05.170 16:31:13 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:20:05.170 16:31:13 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:05.170 16:31:13 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:20:05.170 16:31:13 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:20:05.170 16:31:13 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:20:05.170 16:31:13 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:05.170 16:31:13 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:05.170 16:31:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:05.170 16:31:13 -- common/autotest_common.sh@10 -- # set +x 00:20:05.170 16:31:13 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:05.170 16:31:13 -- nvmf/common.sh@470 -- # nvmfpid=516429 00:20:05.170 16:31:13 -- nvmf/common.sh@471 -- # waitforlisten 516429 00:20:05.170 16:31:13 -- common/autotest_common.sh@817 -- # '[' 
-z 516429 ']' 00:20:05.170 16:31:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:05.170 16:31:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:05.170 16:31:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:05.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:05.170 16:31:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:05.170 16:31:13 -- common/autotest_common.sh@10 -- # set +x 00:20:05.170 [2024-04-26 16:31:13.445204] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:20:05.170 [2024-04-26 16:31:13.445259] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:05.170 EAL: No free 2048 kB hugepages reported on node 1 00:20:05.170 [2024-04-26 16:31:13.518099] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:05.170 [2024-04-26 16:31:13.604936] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:05.170 [2024-04-26 16:31:13.604977] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:05.170 [2024-04-26 16:31:13.604987] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:05.170 [2024-04-26 16:31:13.605012] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:05.170 [2024-04-26 16:31:13.605019] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:05.170 [2024-04-26 16:31:13.605070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:05.170 [2024-04-26 16:31:13.605154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:05.170 [2024-04-26 16:31:13.605232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:05.170 [2024-04-26 16:31:13.605233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.429 16:31:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:05.429 16:31:14 -- common/autotest_common.sh@850 -- # return 0 00:20:05.429 16:31:14 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:05.429 16:31:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:05.429 16:31:14 -- common/autotest_common.sh@10 -- # set +x 00:20:05.429 16:31:14 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:05.429 16:31:14 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:05.429 16:31:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:05.429 16:31:14 -- common/autotest_common.sh@10 -- # set +x 00:20:05.429 [2024-04-26 16:31:14.346593] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x143f310/0x1443800) succeed. 00:20:05.429 [2024-04-26 16:31:14.356851] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1440950/0x1484e90) succeed. 
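At this point the target is up and rpc_cmd has just created the RDMA transport; the entries that follow add a Malloc bdev, a two-namespace subsystem, its first namespace, and an RDMA listener on 192.168.100.8:4420 before querying nvmf_get_subsystems. A standalone sketch of the same sequence, assuming scripts/rpc.py and the default RPC socket (rpc_cmd in the test is a thin wrapper that supplies these), with values copied from the trace below:

#!/usr/bin/env bash
# Sketch only: replays the RPCs that the aer.sh trace issues via rpc_cmd.
RPC=./scripts/rpc.py   # adjust to the SPDK checkout being used

$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$RPC bdev_malloc_create 64 512 --name Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$RPC nvmf_get_subsystems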
00:20:05.688 16:31:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.688 16:31:14 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:05.688 16:31:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:05.688 16:31:14 -- common/autotest_common.sh@10 -- # set +x 00:20:05.688 Malloc0 00:20:05.688 16:31:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.688 16:31:14 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:05.688 16:31:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:05.688 16:31:14 -- common/autotest_common.sh@10 -- # set +x 00:20:05.688 16:31:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.688 16:31:14 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:05.688 16:31:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:05.688 16:31:14 -- common/autotest_common.sh@10 -- # set +x 00:20:05.688 16:31:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.688 16:31:14 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:05.688 16:31:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:05.688 16:31:14 -- common/autotest_common.sh@10 -- # set +x 00:20:05.688 [2024-04-26 16:31:14.523409] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:05.688 16:31:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.688 16:31:14 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:05.688 16:31:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:05.688 16:31:14 -- common/autotest_common.sh@10 -- # set +x 00:20:05.689 [2024-04-26 16:31:14.531192] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:20:05.689 [ 00:20:05.689 { 00:20:05.689 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:05.689 "subtype": "Discovery", 00:20:05.689 "listen_addresses": [], 00:20:05.689 "allow_any_host": true, 00:20:05.689 "hosts": [] 00:20:05.689 }, 00:20:05.689 { 00:20:05.689 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:05.689 "subtype": "NVMe", 00:20:05.689 "listen_addresses": [ 00:20:05.689 { 00:20:05.689 "transport": "RDMA", 00:20:05.689 "trtype": "RDMA", 00:20:05.689 "adrfam": "IPv4", 00:20:05.689 "traddr": "192.168.100.8", 00:20:05.689 "trsvcid": "4420" 00:20:05.689 } 00:20:05.689 ], 00:20:05.689 "allow_any_host": true, 00:20:05.689 "hosts": [], 00:20:05.689 "serial_number": "SPDK00000000000001", 00:20:05.689 "model_number": "SPDK bdev Controller", 00:20:05.689 "max_namespaces": 2, 00:20:05.689 "min_cntlid": 1, 00:20:05.689 "max_cntlid": 65519, 00:20:05.689 "namespaces": [ 00:20:05.689 { 00:20:05.689 "nsid": 1, 00:20:05.689 "bdev_name": "Malloc0", 00:20:05.689 "name": "Malloc0", 00:20:05.689 "nguid": "0449CC00EA444E54819E148CC7F3D8D6", 00:20:05.689 "uuid": "0449cc00-ea44-4e54-819e-148cc7f3d8d6" 00:20:05.689 } 00:20:05.689 ] 00:20:05.689 } 00:20:05.689 ] 00:20:05.689 16:31:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.689 16:31:14 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:05.689 16:31:14 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:05.689 16:31:14 -- host/aer.sh@33 -- # aerpid=516514 00:20:05.689 16:31:14 -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:05.689 16:31:14 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:05.689 16:31:14 -- common/autotest_common.sh@1251 -- # local i=0 00:20:05.689 16:31:14 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:05.689 16:31:14 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']' 00:20:05.689 16:31:14 -- common/autotest_common.sh@1254 -- # i=1 00:20:05.689 16:31:14 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:20:05.689 EAL: No free 2048 kB hugepages reported on node 1 00:20:05.689 16:31:14 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:05.689 16:31:14 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']' 00:20:05.689 16:31:14 -- common/autotest_common.sh@1254 -- # i=2 00:20:05.689 16:31:14 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:20:05.948 16:31:14 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:05.948 16:31:14 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:05.948 16:31:14 -- common/autotest_common.sh@1262 -- # return 0 00:20:05.948 16:31:14 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:05.948 16:31:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:05.948 16:31:14 -- common/autotest_common.sh@10 -- # set +x 00:20:05.948 Malloc1 00:20:05.948 16:31:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.948 16:31:14 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:05.948 16:31:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:05.948 16:31:14 -- common/autotest_common.sh@10 -- # set +x 00:20:05.948 16:31:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.948 16:31:14 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:05.948 16:31:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:05.948 16:31:14 -- common/autotest_common.sh@10 -- # set +x 00:20:05.948 [ 00:20:05.948 { 00:20:05.948 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:05.948 "subtype": "Discovery", 00:20:05.948 "listen_addresses": [], 00:20:05.948 "allow_any_host": true, 00:20:05.948 "hosts": [] 00:20:05.948 }, 00:20:05.948 { 00:20:05.948 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:05.948 "subtype": "NVMe", 00:20:05.948 "listen_addresses": [ 00:20:05.948 { 00:20:05.948 "transport": "RDMA", 00:20:05.948 "trtype": "RDMA", 00:20:05.948 "adrfam": "IPv4", 00:20:05.948 "traddr": "192.168.100.8", 00:20:05.948 "trsvcid": "4420" 00:20:05.948 } 00:20:05.948 ], 00:20:05.948 "allow_any_host": true, 00:20:05.948 "hosts": [], 00:20:05.948 "serial_number": "SPDK00000000000001", 00:20:05.948 "model_number": "SPDK bdev Controller", 00:20:05.948 "max_namespaces": 2, 00:20:05.948 "min_cntlid": 1, 00:20:05.948 "max_cntlid": 65519, 00:20:05.948 "namespaces": [ 00:20:05.948 { 00:20:05.948 "nsid": 1, 00:20:05.948 "bdev_name": "Malloc0", 00:20:05.948 "name": "Malloc0", 00:20:05.948 "nguid": "0449CC00EA444E54819E148CC7F3D8D6", 00:20:05.948 "uuid": "0449cc00-ea44-4e54-819e-148cc7f3d8d6" 00:20:05.948 }, 00:20:05.948 { 00:20:05.948 "nsid": 2, 00:20:05.948 "bdev_name": "Malloc1", 00:20:05.948 "name": "Malloc1", 00:20:05.948 "nguid": "68AB16019EAD40138E413EEE733D04C9", 00:20:05.948 "uuid": "68ab1601-9ead-4013-8e41-3eee733d04c9" 00:20:05.948 } 00:20:05.948 ] 00:20:05.948 } 
00:20:05.948 ] 00:20:05.948 16:31:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.948 16:31:14 -- host/aer.sh@43 -- # wait 516514 00:20:05.948 Asynchronous Event Request test 00:20:05.948 Attaching to 192.168.100.8 00:20:05.948 Attached to 192.168.100.8 00:20:05.948 Registering asynchronous event callbacks... 00:20:05.948 Starting namespace attribute notice tests for all controllers... 00:20:05.948 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:05.948 aer_cb - Changed Namespace 00:20:05.948 Cleaning up... 00:20:05.948 16:31:14 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:05.948 16:31:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:05.948 16:31:14 -- common/autotest_common.sh@10 -- # set +x 00:20:05.948 16:31:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.948 16:31:14 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:05.948 16:31:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:05.948 16:31:14 -- common/autotest_common.sh@10 -- # set +x 00:20:05.948 16:31:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.948 16:31:14 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:05.948 16:31:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:05.948 16:31:14 -- common/autotest_common.sh@10 -- # set +x 00:20:05.948 16:31:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.948 16:31:14 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:05.948 16:31:14 -- host/aer.sh@51 -- # nvmftestfini 00:20:05.948 16:31:14 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:05.948 16:31:14 -- nvmf/common.sh@117 -- # sync 00:20:05.948 16:31:14 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:20:05.948 16:31:14 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:20:05.948 16:31:14 -- nvmf/common.sh@120 -- # set +e 00:20:05.948 16:31:14 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:05.948 16:31:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:20:05.948 rmmod nvme_rdma 00:20:05.948 rmmod nvme_fabrics 00:20:05.948 16:31:14 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:06.207 16:31:14 -- nvmf/common.sh@124 -- # set -e 00:20:06.207 16:31:14 -- nvmf/common.sh@125 -- # return 0 00:20:06.207 16:31:14 -- nvmf/common.sh@478 -- # '[' -n 516429 ']' 00:20:06.207 16:31:14 -- nvmf/common.sh@479 -- # killprocess 516429 00:20:06.207 16:31:14 -- common/autotest_common.sh@936 -- # '[' -z 516429 ']' 00:20:06.207 16:31:14 -- common/autotest_common.sh@940 -- # kill -0 516429 00:20:06.207 16:31:14 -- common/autotest_common.sh@941 -- # uname 00:20:06.207 16:31:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:06.207 16:31:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 516429 00:20:06.207 16:31:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:06.207 16:31:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:06.207 16:31:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 516429' 00:20:06.207 killing process with pid 516429 00:20:06.207 16:31:15 -- common/autotest_common.sh@955 -- # kill 516429 00:20:06.207 [2024-04-26 16:31:15.025804] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:20:06.207 16:31:15 -- common/autotest_common.sh@960 -- # wait 516429 00:20:06.207 [2024-04-26 16:31:15.111464] 
rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:20:06.466 16:31:15 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:06.466 16:31:15 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:20:06.466 00:20:06.466 real 0m8.064s 00:20:06.466 user 0m8.350s 00:20:06.466 sys 0m5.137s 00:20:06.466 16:31:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:06.466 16:31:15 -- common/autotest_common.sh@10 -- # set +x 00:20:06.466 ************************************ 00:20:06.466 END TEST nvmf_aer 00:20:06.466 ************************************ 00:20:06.466 16:31:15 -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:20:06.466 16:31:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:06.466 16:31:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:06.466 16:31:15 -- common/autotest_common.sh@10 -- # set +x 00:20:06.726 ************************************ 00:20:06.726 START TEST nvmf_async_init 00:20:06.726 ************************************ 00:20:06.726 16:31:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:20:06.726 * Looking for test storage... 00:20:06.726 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:06.726 16:31:15 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:06.726 16:31:15 -- nvmf/common.sh@7 -- # uname -s 00:20:06.726 16:31:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:06.726 16:31:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:06.726 16:31:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:06.726 16:31:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:06.726 16:31:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:06.726 16:31:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:06.726 16:31:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:06.726 16:31:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:06.726 16:31:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:06.726 16:31:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:06.726 16:31:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:20:06.726 16:31:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:20:06.726 16:31:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:06.726 16:31:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:06.726 16:31:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:06.726 16:31:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:06.726 16:31:15 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:06.726 16:31:15 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:06.726 16:31:15 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:06.726 16:31:15 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:06.726 16:31:15 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.726 16:31:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.726 16:31:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.726 16:31:15 -- paths/export.sh@5 -- # export PATH 00:20:06.726 16:31:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.726 16:31:15 -- nvmf/common.sh@47 -- # : 0 00:20:06.726 16:31:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:06.726 16:31:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:06.726 16:31:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:06.726 16:31:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:06.726 16:31:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:06.726 16:31:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:06.726 16:31:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:06.726 16:31:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:06.726 16:31:15 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:06.726 16:31:15 -- host/async_init.sh@14 -- # null_block_size=512 00:20:06.726 16:31:15 -- host/async_init.sh@15 -- # null_bdev=null0 00:20:06.726 16:31:15 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:06.726 16:31:15 -- host/async_init.sh@20 -- # uuidgen 00:20:06.726 16:31:15 -- host/async_init.sh@20 -- # tr -d - 00:20:06.726 16:31:15 -- host/async_init.sh@20 -- # nguid=9e8f59c98218439a8e8e9735bd6897e2 00:20:06.726 16:31:15 -- host/async_init.sh@22 -- # nvmftestinit 00:20:06.726 16:31:15 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 
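host/async_init.sh@20 above builds the namespace NGUID it will register by generating a UUID and stripping the dashes; a minimal sketch of that two-command pipeline (the hex value differs on every run):

  # NGUID = UUID with the dashes removed, e.g. 9e8f59c98218439a8e8e9735bd6897e2.
  nguid=$(uuidgen | tr -d -)
  echo "$nguid"

The same value reappears later in the bdev_get_bdevs output, both as the namespace NGUID and, re-hyphenated, as the bdev UUID/alias.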
00:20:06.726 16:31:15 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:06.726 16:31:15 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:06.726 16:31:15 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:06.726 16:31:15 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:06.726 16:31:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:06.726 16:31:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:06.726 16:31:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.726 16:31:15 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:06.726 16:31:15 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:06.726 16:31:15 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:06.726 16:31:15 -- common/autotest_common.sh@10 -- # set +x 00:20:11.996 16:31:21 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:11.996 16:31:21 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:11.996 16:31:21 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:11.996 16:31:21 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:11.996 16:31:21 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:11.996 16:31:21 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:11.996 16:31:21 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:11.996 16:31:21 -- nvmf/common.sh@295 -- # net_devs=() 00:20:11.996 16:31:21 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:11.996 16:31:21 -- nvmf/common.sh@296 -- # e810=() 00:20:11.996 16:31:21 -- nvmf/common.sh@296 -- # local -ga e810 00:20:11.996 16:31:21 -- nvmf/common.sh@297 -- # x722=() 00:20:12.258 16:31:21 -- nvmf/common.sh@297 -- # local -ga x722 00:20:12.258 16:31:21 -- nvmf/common.sh@298 -- # mlx=() 00:20:12.258 16:31:21 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:12.258 16:31:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:12.259 16:31:21 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:12.259 16:31:21 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:12.259 16:31:21 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:12.259 16:31:21 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:12.259 16:31:21 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:12.259 16:31:21 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:12.259 16:31:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:12.259 16:31:21 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:12.259 16:31:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:12.259 16:31:21 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:12.259 16:31:21 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:12.259 16:31:21 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:20:12.259 16:31:21 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:20:12.259 16:31:21 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:20:12.259 16:31:21 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:20:12.259 16:31:21 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:20:12.259 16:31:21 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:12.259 16:31:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:12.259 16:31:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:20:12.259 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:20:12.259 16:31:21 -- 
nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:12.259 16:31:21 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:12.259 16:31:21 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:20:12.259 16:31:21 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:20:12.259 16:31:21 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:12.259 16:31:21 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:12.259 16:31:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:12.259 16:31:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:20:12.259 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:20:12.259 16:31:21 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:12.259 16:31:21 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:12.259 16:31:21 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:20:12.259 16:31:21 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:20:12.259 16:31:21 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:12.259 16:31:21 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:12.259 16:31:21 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:12.259 16:31:21 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:20:12.259 16:31:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:12.259 16:31:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:12.259 16:31:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:12.259 16:31:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:12.259 16:31:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:20:12.259 Found net devices under 0000:18:00.0: mlx_0_0 00:20:12.259 16:31:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:12.259 16:31:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:12.259 16:31:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:12.259 16:31:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:12.259 16:31:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:12.259 16:31:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:20:12.259 Found net devices under 0000:18:00.1: mlx_0_1 00:20:12.259 16:31:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:12.259 16:31:21 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:12.259 16:31:21 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:12.259 16:31:21 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:12.259 16:31:21 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:20:12.259 16:31:21 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:20:12.259 16:31:21 -- nvmf/common.sh@409 -- # rdma_device_init 00:20:12.259 16:31:21 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:20:12.259 16:31:21 -- nvmf/common.sh@58 -- # uname 00:20:12.259 16:31:21 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:20:12.259 16:31:21 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:20:12.259 16:31:21 -- nvmf/common.sh@63 -- # modprobe ib_core 00:20:12.259 16:31:21 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:20:12.259 16:31:21 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:20:12.259 16:31:21 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:20:12.259 16:31:21 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:20:12.259 16:31:21 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:20:12.259 16:31:21 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:20:12.259 16:31:21 -- nvmf/common.sh@72 -- # (( count = 
NVMF_IP_LEAST_ADDR )) 00:20:12.259 16:31:21 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:20:12.259 16:31:21 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:12.259 16:31:21 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:12.259 16:31:21 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:12.259 16:31:21 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:12.259 16:31:21 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:12.259 16:31:21 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:12.259 16:31:21 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:12.259 16:31:21 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:12.259 16:31:21 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:12.259 16:31:21 -- nvmf/common.sh@105 -- # continue 2 00:20:12.259 16:31:21 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:12.259 16:31:21 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:12.259 16:31:21 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:12.259 16:31:21 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:12.259 16:31:21 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:12.259 16:31:21 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:12.259 16:31:21 -- nvmf/common.sh@105 -- # continue 2 00:20:12.259 16:31:21 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:12.259 16:31:21 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:20:12.259 16:31:21 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:12.259 16:31:21 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:12.259 16:31:21 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:12.259 16:31:21 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:12.259 16:31:21 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:20:12.259 16:31:21 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:20:12.259 16:31:21 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:20:12.259 15: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:12.259 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:20:12.259 altname enp24s0f0np0 00:20:12.259 altname ens785f0np0 00:20:12.259 inet 192.168.100.8/24 scope global mlx_0_0 00:20:12.259 valid_lft forever preferred_lft forever 00:20:12.259 16:31:21 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:12.259 16:31:21 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:20:12.259 16:31:21 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:12.259 16:31:21 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:12.259 16:31:21 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:12.259 16:31:21 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:12.259 16:31:21 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:20:12.259 16:31:21 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:20:12.259 16:31:21 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:20:12.259 16: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:12.259 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:20:12.259 altname enp24s0f1np1 00:20:12.259 altname ens785f1np1 00:20:12.259 inet 192.168.100.9/24 scope global mlx_0_1 00:20:12.259 valid_lft forever preferred_lft forever 00:20:12.259 16:31:21 -- nvmf/common.sh@411 -- # return 0 00:20:12.259 16:31:21 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:12.259 16:31:21 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:12.259 16:31:21 -- 
nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:20:12.259 16:31:21 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:20:12.259 16:31:21 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:20:12.259 16:31:21 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:12.259 16:31:21 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:12.259 16:31:21 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:12.259 16:31:21 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:12.259 16:31:21 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:12.259 16:31:21 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:12.259 16:31:21 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:12.259 16:31:21 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:12.259 16:31:21 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:12.259 16:31:21 -- nvmf/common.sh@105 -- # continue 2 00:20:12.259 16:31:21 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:12.259 16:31:21 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:12.259 16:31:21 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:12.259 16:31:21 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:12.259 16:31:21 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:12.259 16:31:21 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:12.259 16:31:21 -- nvmf/common.sh@105 -- # continue 2 00:20:12.259 16:31:21 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:12.259 16:31:21 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:20:12.259 16:31:21 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:12.259 16:31:21 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:12.259 16:31:21 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:12.259 16:31:21 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:12.259 16:31:21 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:12.259 16:31:21 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:20:12.259 16:31:21 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:12.259 16:31:21 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:12.259 16:31:21 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:12.259 16:31:21 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:12.259 16:31:21 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:20:12.259 192.168.100.9' 00:20:12.259 16:31:21 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:20:12.259 192.168.100.9' 00:20:12.259 16:31:21 -- nvmf/common.sh@446 -- # head -n 1 00:20:12.259 16:31:21 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:12.259 16:31:21 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:20:12.259 192.168.100.9' 00:20:12.260 16:31:21 -- nvmf/common.sh@447 -- # head -n 1 00:20:12.260 16:31:21 -- nvmf/common.sh@447 -- # tail -n +2 00:20:12.260 16:31:21 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:12.260 16:31:21 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:20:12.260 16:31:21 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:12.260 16:31:21 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:20:12.260 16:31:21 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:20:12.260 16:31:21 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:20:12.541 16:31:21 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:12.542 16:31:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:12.542 
16:31:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:12.542 16:31:21 -- common/autotest_common.sh@10 -- # set +x 00:20:12.542 16:31:21 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:12.542 16:31:21 -- nvmf/common.sh@470 -- # nvmfpid=519406 00:20:12.542 16:31:21 -- nvmf/common.sh@471 -- # waitforlisten 519406 00:20:12.542 16:31:21 -- common/autotest_common.sh@817 -- # '[' -z 519406 ']' 00:20:12.542 16:31:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:12.542 16:31:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:12.542 16:31:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:12.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:12.542 16:31:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:12.542 16:31:21 -- common/autotest_common.sh@10 -- # set +x 00:20:12.542 [2024-04-26 16:31:21.321860] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:20:12.542 [2024-04-26 16:31:21.321918] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:12.542 EAL: No free 2048 kB hugepages reported on node 1 00:20:12.542 [2024-04-26 16:31:21.392107] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.542 [2024-04-26 16:31:21.477120] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:12.542 [2024-04-26 16:31:21.477162] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:12.542 [2024-04-26 16:31:21.477171] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:12.542 [2024-04-26 16:31:21.477180] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:12.542 [2024-04-26 16:31:21.477187] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:12.542 [2024-04-26 16:31:21.477209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.207 16:31:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:13.207 16:31:22 -- common/autotest_common.sh@850 -- # return 0 00:20:13.207 16:31:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:13.207 16:31:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:13.207 16:31:22 -- common/autotest_common.sh@10 -- # set +x 00:20:13.207 16:31:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:13.207 16:31:22 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:20:13.207 16:31:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.207 16:31:22 -- common/autotest_common.sh@10 -- # set +x 00:20:13.207 [2024-04-26 16:31:22.186115] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x11c4170/0x11c8660) succeed. 00:20:13.207 [2024-04-26 16:31:22.195283] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x11c5670/0x1209cf0) succeed. 
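The nvmfappstart sequence traced above starts the target pinned to core 0 and blocks until its RPC socket is up before creating the transport. A simplified stand-in for that startup (the real waitforlisten helper also tracks the pid and retries; here the wait is just on the UNIX socket):

  # Start the target single-core, with all tracepoint groups enabled (-e 0xFFFF).
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  # Wait for the RPC socket before issuing any RPCs.
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
  # Same transport options as the trace.
  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024

With only core 0 in the mask, a single reactor comes up, which is all the async_init RPC flow needs.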
00:20:13.556 16:31:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.556 16:31:22 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:13.556 16:31:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.556 16:31:22 -- common/autotest_common.sh@10 -- # set +x 00:20:13.556 null0 00:20:13.556 16:31:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.556 16:31:22 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:13.556 16:31:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.556 16:31:22 -- common/autotest_common.sh@10 -- # set +x 00:20:13.556 16:31:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.556 16:31:22 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:13.556 16:31:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.556 16:31:22 -- common/autotest_common.sh@10 -- # set +x 00:20:13.556 16:31:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.556 16:31:22 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 9e8f59c98218439a8e8e9735bd6897e2 00:20:13.556 16:31:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.556 16:31:22 -- common/autotest_common.sh@10 -- # set +x 00:20:13.556 16:31:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.556 16:31:22 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:20:13.556 16:31:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.556 16:31:22 -- common/autotest_common.sh@10 -- # set +x 00:20:13.556 [2024-04-26 16:31:22.281647] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:13.556 16:31:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.556 16:31:22 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:13.556 16:31:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.556 16:31:22 -- common/autotest_common.sh@10 -- # set +x 00:20:13.556 nvme0n1 00:20:13.556 16:31:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.556 16:31:22 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:13.556 16:31:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.556 16:31:22 -- common/autotest_common.sh@10 -- # set +x 00:20:13.556 [ 00:20:13.556 { 00:20:13.556 "name": "nvme0n1", 00:20:13.556 "aliases": [ 00:20:13.556 "9e8f59c9-8218-439a-8e8e-9735bd6897e2" 00:20:13.556 ], 00:20:13.556 "product_name": "NVMe disk", 00:20:13.556 "block_size": 512, 00:20:13.556 "num_blocks": 2097152, 00:20:13.556 "uuid": "9e8f59c9-8218-439a-8e8e-9735bd6897e2", 00:20:13.556 "assigned_rate_limits": { 00:20:13.556 "rw_ios_per_sec": 0, 00:20:13.556 "rw_mbytes_per_sec": 0, 00:20:13.556 "r_mbytes_per_sec": 0, 00:20:13.556 "w_mbytes_per_sec": 0 00:20:13.556 }, 00:20:13.556 "claimed": false, 00:20:13.556 "zoned": false, 00:20:13.556 "supported_io_types": { 00:20:13.556 "read": true, 00:20:13.556 "write": true, 00:20:13.556 "unmap": false, 00:20:13.556 "write_zeroes": true, 00:20:13.556 "flush": true, 00:20:13.556 "reset": true, 00:20:13.556 "compare": true, 00:20:13.556 "compare_and_write": true, 00:20:13.556 "abort": true, 00:20:13.556 "nvme_admin": true, 00:20:13.556 "nvme_io": true 00:20:13.556 }, 00:20:13.556 "memory_domains": [ 00:20:13.556 { 00:20:13.556 
"dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:20:13.556 "dma_device_type": 0 00:20:13.556 } 00:20:13.556 ], 00:20:13.556 "driver_specific": { 00:20:13.556 "nvme": [ 00:20:13.556 { 00:20:13.556 "trid": { 00:20:13.556 "trtype": "RDMA", 00:20:13.556 "adrfam": "IPv4", 00:20:13.556 "traddr": "192.168.100.8", 00:20:13.556 "trsvcid": "4420", 00:20:13.556 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:13.556 }, 00:20:13.556 "ctrlr_data": { 00:20:13.556 "cntlid": 1, 00:20:13.556 "vendor_id": "0x8086", 00:20:13.556 "model_number": "SPDK bdev Controller", 00:20:13.556 "serial_number": "00000000000000000000", 00:20:13.556 "firmware_revision": "24.05", 00:20:13.556 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:13.556 "oacs": { 00:20:13.556 "security": 0, 00:20:13.556 "format": 0, 00:20:13.556 "firmware": 0, 00:20:13.556 "ns_manage": 0 00:20:13.556 }, 00:20:13.556 "multi_ctrlr": true, 00:20:13.556 "ana_reporting": false 00:20:13.556 }, 00:20:13.556 "vs": { 00:20:13.556 "nvme_version": "1.3" 00:20:13.556 }, 00:20:13.556 "ns_data": { 00:20:13.556 "id": 1, 00:20:13.556 "can_share": true 00:20:13.556 } 00:20:13.556 } 00:20:13.556 ], 00:20:13.556 "mp_policy": "active_passive" 00:20:13.556 } 00:20:13.556 } 00:20:13.556 ] 00:20:13.556 16:31:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.556 16:31:22 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:13.556 16:31:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.556 16:31:22 -- common/autotest_common.sh@10 -- # set +x 00:20:13.556 [2024-04-26 16:31:22.386035] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:13.556 [2024-04-26 16:31:22.404072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:13.556 [2024-04-26 16:31:22.425015] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:13.556 16:31:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.556 16:31:22 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:13.556 16:31:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.556 16:31:22 -- common/autotest_common.sh@10 -- # set +x 00:20:13.556 [ 00:20:13.556 { 00:20:13.556 "name": "nvme0n1", 00:20:13.556 "aliases": [ 00:20:13.556 "9e8f59c9-8218-439a-8e8e-9735bd6897e2" 00:20:13.556 ], 00:20:13.557 "product_name": "NVMe disk", 00:20:13.557 "block_size": 512, 00:20:13.557 "num_blocks": 2097152, 00:20:13.557 "uuid": "9e8f59c9-8218-439a-8e8e-9735bd6897e2", 00:20:13.557 "assigned_rate_limits": { 00:20:13.557 "rw_ios_per_sec": 0, 00:20:13.557 "rw_mbytes_per_sec": 0, 00:20:13.557 "r_mbytes_per_sec": 0, 00:20:13.557 "w_mbytes_per_sec": 0 00:20:13.557 }, 00:20:13.557 "claimed": false, 00:20:13.557 "zoned": false, 00:20:13.557 "supported_io_types": { 00:20:13.557 "read": true, 00:20:13.557 "write": true, 00:20:13.557 "unmap": false, 00:20:13.557 "write_zeroes": true, 00:20:13.557 "flush": true, 00:20:13.557 "reset": true, 00:20:13.557 "compare": true, 00:20:13.557 "compare_and_write": true, 00:20:13.557 "abort": true, 00:20:13.557 "nvme_admin": true, 00:20:13.557 "nvme_io": true 00:20:13.557 }, 00:20:13.557 "memory_domains": [ 00:20:13.557 { 00:20:13.557 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:20:13.557 "dma_device_type": 0 00:20:13.557 } 00:20:13.557 ], 00:20:13.557 "driver_specific": { 00:20:13.557 "nvme": [ 00:20:13.557 { 00:20:13.557 "trid": { 00:20:13.557 "trtype": "RDMA", 00:20:13.557 "adrfam": "IPv4", 00:20:13.557 "traddr": "192.168.100.8", 00:20:13.557 "trsvcid": "4420", 00:20:13.557 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:13.557 }, 00:20:13.557 "ctrlr_data": { 00:20:13.557 "cntlid": 2, 00:20:13.557 "vendor_id": "0x8086", 00:20:13.557 "model_number": "SPDK bdev Controller", 00:20:13.557 "serial_number": "00000000000000000000", 00:20:13.557 "firmware_revision": "24.05", 00:20:13.557 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:13.557 "oacs": { 00:20:13.557 "security": 0, 00:20:13.557 "format": 0, 00:20:13.557 "firmware": 0, 00:20:13.557 "ns_manage": 0 00:20:13.557 }, 00:20:13.557 "multi_ctrlr": true, 00:20:13.557 "ana_reporting": false 00:20:13.557 }, 00:20:13.557 "vs": { 00:20:13.557 "nvme_version": "1.3" 00:20:13.557 }, 00:20:13.557 "ns_data": { 00:20:13.557 "id": 1, 00:20:13.557 "can_share": true 00:20:13.557 } 00:20:13.557 } 00:20:13.557 ], 00:20:13.557 "mp_policy": "active_passive" 00:20:13.557 } 00:20:13.557 } 00:20:13.557 ] 00:20:13.557 16:31:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.557 16:31:22 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:13.557 16:31:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.557 16:31:22 -- common/autotest_common.sh@10 -- # set +x 00:20:13.557 16:31:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.557 16:31:22 -- host/async_init.sh@53 -- # mktemp 00:20:13.557 16:31:22 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.aHqfgZgvqz 00:20:13.557 16:31:22 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:13.557 16:31:22 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.aHqfgZgvqz 00:20:13.557 16:31:22 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:13.557 16:31:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.557 16:31:22 -- common/autotest_common.sh@10 -- # set +x 
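The trace then switches the subsystem to explicit host authorization and re-exposes it on a TLS-protected listener: allow_any_host is disabled above, and the lines below add a secure-channel listener on port 4421, authorize host1 with an interchange PSK, and re-attach using that key. The key handling reduces to the following sketch (the PSK is the test's sample interchange key as echoed above, not a production secret):

  # Write the configured PSK to a 0600 temp file, as async_init.sh does.
  key_path=$(mktemp)
  echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
  chmod 0600 "$key_path"
  # Require explicit host authorization, then expose a secure-channel listener on 4421.
  scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"
  # Re-attach as host1 on the secured port, presenting the same PSK.
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"

The attach succeeds with a notice that TLS support is still considered experimental, and the bdev_get_bdevs output that follows shows cntlid advancing to 3 on the new 4421 association.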
00:20:13.557 16:31:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.557 16:31:22 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:20:13.557 16:31:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.557 16:31:22 -- common/autotest_common.sh@10 -- # set +x 00:20:13.557 [2024-04-26 16:31:22.499406] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:20:13.557 16:31:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.557 16:31:22 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aHqfgZgvqz 00:20:13.557 16:31:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.557 16:31:22 -- common/autotest_common.sh@10 -- # set +x 00:20:13.557 16:31:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.557 16:31:22 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aHqfgZgvqz 00:20:13.557 16:31:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.557 16:31:22 -- common/autotest_common.sh@10 -- # set +x 00:20:13.557 [2024-04-26 16:31:22.515428] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:13.886 nvme0n1 00:20:13.886 16:31:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.886 16:31:22 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:13.886 16:31:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.886 16:31:22 -- common/autotest_common.sh@10 -- # set +x 00:20:13.886 [ 00:20:13.886 { 00:20:13.886 "name": "nvme0n1", 00:20:13.886 "aliases": [ 00:20:13.886 "9e8f59c9-8218-439a-8e8e-9735bd6897e2" 00:20:13.886 ], 00:20:13.886 "product_name": "NVMe disk", 00:20:13.886 "block_size": 512, 00:20:13.886 "num_blocks": 2097152, 00:20:13.886 "uuid": "9e8f59c9-8218-439a-8e8e-9735bd6897e2", 00:20:13.886 "assigned_rate_limits": { 00:20:13.886 "rw_ios_per_sec": 0, 00:20:13.886 "rw_mbytes_per_sec": 0, 00:20:13.886 "r_mbytes_per_sec": 0, 00:20:13.886 "w_mbytes_per_sec": 0 00:20:13.886 }, 00:20:13.886 "claimed": false, 00:20:13.886 "zoned": false, 00:20:13.886 "supported_io_types": { 00:20:13.886 "read": true, 00:20:13.886 "write": true, 00:20:13.886 "unmap": false, 00:20:13.886 "write_zeroes": true, 00:20:13.886 "flush": true, 00:20:13.886 "reset": true, 00:20:13.886 "compare": true, 00:20:13.886 "compare_and_write": true, 00:20:13.886 "abort": true, 00:20:13.886 "nvme_admin": true, 00:20:13.886 "nvme_io": true 00:20:13.886 }, 00:20:13.886 "memory_domains": [ 00:20:13.886 { 00:20:13.886 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:20:13.886 "dma_device_type": 0 00:20:13.886 } 00:20:13.886 ], 00:20:13.886 "driver_specific": { 00:20:13.886 "nvme": [ 00:20:13.886 { 00:20:13.886 "trid": { 00:20:13.886 "trtype": "RDMA", 00:20:13.886 "adrfam": "IPv4", 00:20:13.886 "traddr": "192.168.100.8", 00:20:13.886 "trsvcid": "4421", 00:20:13.886 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:13.886 }, 00:20:13.886 "ctrlr_data": { 00:20:13.886 "cntlid": 3, 00:20:13.886 "vendor_id": "0x8086", 00:20:13.886 "model_number": "SPDK bdev Controller", 00:20:13.886 "serial_number": "00000000000000000000", 00:20:13.886 "firmware_revision": "24.05", 00:20:13.886 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:13.886 "oacs": 
{ 00:20:13.886 "security": 0, 00:20:13.886 "format": 0, 00:20:13.886 "firmware": 0, 00:20:13.886 "ns_manage": 0 00:20:13.886 }, 00:20:13.886 "multi_ctrlr": true, 00:20:13.886 "ana_reporting": false 00:20:13.886 }, 00:20:13.886 "vs": { 00:20:13.886 "nvme_version": "1.3" 00:20:13.886 }, 00:20:13.886 "ns_data": { 00:20:13.886 "id": 1, 00:20:13.886 "can_share": true 00:20:13.886 } 00:20:13.886 } 00:20:13.886 ], 00:20:13.886 "mp_policy": "active_passive" 00:20:13.886 } 00:20:13.886 } 00:20:13.886 ] 00:20:13.886 16:31:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.886 16:31:22 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:13.886 16:31:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.886 16:31:22 -- common/autotest_common.sh@10 -- # set +x 00:20:13.886 16:31:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.886 16:31:22 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.aHqfgZgvqz 00:20:13.886 16:31:22 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:13.886 16:31:22 -- host/async_init.sh@78 -- # nvmftestfini 00:20:13.886 16:31:22 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:13.886 16:31:22 -- nvmf/common.sh@117 -- # sync 00:20:13.886 16:31:22 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:20:13.886 16:31:22 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:20:13.886 16:31:22 -- nvmf/common.sh@120 -- # set +e 00:20:13.886 16:31:22 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:13.886 16:31:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:20:13.886 rmmod nvme_rdma 00:20:13.886 rmmod nvme_fabrics 00:20:13.886 16:31:22 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:13.886 16:31:22 -- nvmf/common.sh@124 -- # set -e 00:20:13.886 16:31:22 -- nvmf/common.sh@125 -- # return 0 00:20:13.886 16:31:22 -- nvmf/common.sh@478 -- # '[' -n 519406 ']' 00:20:13.886 16:31:22 -- nvmf/common.sh@479 -- # killprocess 519406 00:20:13.886 16:31:22 -- common/autotest_common.sh@936 -- # '[' -z 519406 ']' 00:20:13.886 16:31:22 -- common/autotest_common.sh@940 -- # kill -0 519406 00:20:13.886 16:31:22 -- common/autotest_common.sh@941 -- # uname 00:20:13.886 16:31:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:13.886 16:31:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 519406 00:20:13.886 16:31:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:13.886 16:31:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:13.886 16:31:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 519406' 00:20:13.886 killing process with pid 519406 00:20:13.886 16:31:22 -- common/autotest_common.sh@955 -- # kill 519406 00:20:13.886 16:31:22 -- common/autotest_common.sh@960 -- # wait 519406 00:20:13.886 [2024-04-26 16:31:22.771106] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:20:14.223 16:31:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:14.223 16:31:22 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:20:14.223 00:20:14.223 real 0m7.424s 00:20:14.223 user 0m3.329s 00:20:14.223 sys 0m4.674s 00:20:14.223 16:31:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:14.223 16:31:22 -- common/autotest_common.sh@10 -- # set +x 00:20:14.223 ************************************ 00:20:14.223 END TEST nvmf_async_init 00:20:14.223 ************************************ 00:20:14.223 16:31:23 -- nvmf/nvmf.sh@92 -- # run_test dma 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:20:14.223 16:31:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:14.223 16:31:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:14.223 16:31:23 -- common/autotest_common.sh@10 -- # set +x 00:20:14.223 ************************************ 00:20:14.223 START TEST dma 00:20:14.223 ************************************ 00:20:14.223 16:31:23 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:20:14.499 * Looking for test storage... 00:20:14.499 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:14.499 16:31:23 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:14.499 16:31:23 -- nvmf/common.sh@7 -- # uname -s 00:20:14.499 16:31:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:14.499 16:31:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:14.499 16:31:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:14.499 16:31:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:14.499 16:31:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:14.499 16:31:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:14.499 16:31:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:14.499 16:31:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:14.499 16:31:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:14.499 16:31:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:14.499 16:31:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:20:14.499 16:31:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:20:14.499 16:31:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:14.499 16:31:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:14.499 16:31:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:14.499 16:31:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:14.499 16:31:23 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:14.499 16:31:23 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:14.499 16:31:23 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:14.499 16:31:23 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:14.499 16:31:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.499 16:31:23 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.499 16:31:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.499 16:31:23 -- paths/export.sh@5 -- # export PATH 00:20:14.499 16:31:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.499 16:31:23 -- nvmf/common.sh@47 -- # : 0 00:20:14.499 16:31:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:14.499 16:31:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:14.499 16:31:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:14.499 16:31:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:14.499 16:31:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:14.499 16:31:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:14.499 16:31:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:14.499 16:31:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:14.499 16:31:23 -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:20:14.499 16:31:23 -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:20:14.499 16:31:23 -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:20:14.499 16:31:23 -- host/dma.sh@18 -- # subsystem=0 00:20:14.499 16:31:23 -- host/dma.sh@93 -- # nvmftestinit 00:20:14.499 16:31:23 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:20:14.499 16:31:23 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:14.499 16:31:23 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:14.499 16:31:23 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:14.499 16:31:23 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:14.499 16:31:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:14.499 16:31:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:14.499 16:31:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:14.499 16:31:23 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:14.499 16:31:23 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:14.499 16:31:23 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:14.499 16:31:23 -- 
common/autotest_common.sh@10 -- # set +x 00:20:21.072 16:31:29 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:21.072 16:31:29 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:21.072 16:31:29 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:21.072 16:31:29 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:21.072 16:31:29 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:21.072 16:31:29 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:21.072 16:31:29 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:21.072 16:31:29 -- nvmf/common.sh@295 -- # net_devs=() 00:20:21.072 16:31:29 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:21.072 16:31:29 -- nvmf/common.sh@296 -- # e810=() 00:20:21.072 16:31:29 -- nvmf/common.sh@296 -- # local -ga e810 00:20:21.072 16:31:29 -- nvmf/common.sh@297 -- # x722=() 00:20:21.072 16:31:29 -- nvmf/common.sh@297 -- # local -ga x722 00:20:21.072 16:31:29 -- nvmf/common.sh@298 -- # mlx=() 00:20:21.072 16:31:29 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:21.072 16:31:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:21.072 16:31:29 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:21.072 16:31:29 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:21.072 16:31:29 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:21.072 16:31:29 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:21.072 16:31:29 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:21.072 16:31:29 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:21.072 16:31:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:21.072 16:31:29 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:21.072 16:31:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:21.072 16:31:29 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:21.072 16:31:29 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:21.072 16:31:29 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:20:21.072 16:31:29 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:20:21.072 16:31:29 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:20:21.072 16:31:29 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:20:21.072 16:31:29 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:20:21.072 16:31:29 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:21.072 16:31:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:21.072 16:31:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:20:21.072 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:20:21.072 16:31:29 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:21.072 16:31:29 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:21.072 16:31:29 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:20:21.072 16:31:29 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:20:21.072 16:31:29 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:21.072 16:31:29 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:21.072 16:31:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:21.072 16:31:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:20:21.072 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:20:21.072 16:31:29 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:21.072 16:31:29 -- nvmf/common.sh@346 -- # [[ 
mlx5_core == unbound ]] 00:20:21.072 16:31:29 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:20:21.072 16:31:29 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:20:21.072 16:31:29 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:21.072 16:31:29 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:21.072 16:31:29 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:21.072 16:31:29 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:20:21.072 16:31:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:21.072 16:31:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:21.072 16:31:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:21.072 16:31:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:21.072 16:31:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:20:21.072 Found net devices under 0000:18:00.0: mlx_0_0 00:20:21.072 16:31:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:21.072 16:31:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:21.072 16:31:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:21.072 16:31:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:21.072 16:31:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:21.072 16:31:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:20:21.072 Found net devices under 0000:18:00.1: mlx_0_1 00:20:21.072 16:31:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:21.072 16:31:29 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:21.072 16:31:29 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:21.072 16:31:29 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:21.072 16:31:29 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:20:21.072 16:31:29 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:20:21.072 16:31:29 -- nvmf/common.sh@409 -- # rdma_device_init 00:20:21.072 16:31:29 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:20:21.072 16:31:29 -- nvmf/common.sh@58 -- # uname 00:20:21.072 16:31:29 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:20:21.072 16:31:29 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:20:21.072 16:31:29 -- nvmf/common.sh@63 -- # modprobe ib_core 00:20:21.072 16:31:29 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:20:21.072 16:31:29 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:20:21.072 16:31:29 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:20:21.072 16:31:29 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:20:21.072 16:31:29 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:20:21.072 16:31:29 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:20:21.072 16:31:29 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:21.072 16:31:29 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:20:21.072 16:31:29 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:21.072 16:31:29 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:21.072 16:31:29 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:21.072 16:31:29 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:21.072 16:31:29 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:21.072 16:31:29 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:21.072 16:31:29 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:21.072 16:31:29 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:21.072 16:31:29 -- nvmf/common.sh@104 
-- # echo mlx_0_0 00:20:21.072 16:31:29 -- nvmf/common.sh@105 -- # continue 2 00:20:21.072 16:31:29 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:21.072 16:31:29 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:21.072 16:31:29 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:21.072 16:31:29 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:21.072 16:31:29 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:21.072 16:31:29 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:21.072 16:31:29 -- nvmf/common.sh@105 -- # continue 2 00:20:21.072 16:31:29 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:21.072 16:31:29 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:20:21.072 16:31:29 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:21.072 16:31:29 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:21.072 16:31:29 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:21.072 16:31:29 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:21.072 16:31:29 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:20:21.072 16:31:29 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:20:21.072 16:31:29 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:20:21.072 15: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:21.072 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:20:21.072 altname enp24s0f0np0 00:20:21.072 altname ens785f0np0 00:20:21.072 inet 192.168.100.8/24 scope global mlx_0_0 00:20:21.072 valid_lft forever preferred_lft forever 00:20:21.072 16:31:29 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:21.072 16:31:29 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:20:21.072 16:31:29 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:21.072 16:31:29 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:21.072 16:31:29 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:21.072 16:31:29 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:21.072 16:31:29 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:20:21.072 16:31:29 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:20:21.072 16:31:29 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:20:21.072 16: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:21.072 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:20:21.072 altname enp24s0f1np1 00:20:21.072 altname ens785f1np1 00:20:21.072 inet 192.168.100.9/24 scope global mlx_0_1 00:20:21.072 valid_lft forever preferred_lft forever 00:20:21.072 16:31:29 -- nvmf/common.sh@411 -- # return 0 00:20:21.072 16:31:29 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:21.072 16:31:29 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:21.072 16:31:29 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:20:21.072 16:31:29 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:20:21.072 16:31:29 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:20:21.072 16:31:29 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:21.072 16:31:29 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:21.072 16:31:29 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:21.072 16:31:29 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:21.072 16:31:29 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:21.072 16:31:29 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:21.072 16:31:29 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:21.072 16:31:29 -- 
nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:21.072 16:31:29 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:21.072 16:31:29 -- nvmf/common.sh@105 -- # continue 2 00:20:21.072 16:31:29 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:21.072 16:31:29 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:21.072 16:31:29 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:21.072 16:31:29 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:21.072 16:31:29 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:21.072 16:31:29 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:21.072 16:31:29 -- nvmf/common.sh@105 -- # continue 2 00:20:21.072 16:31:29 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:21.072 16:31:29 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:20:21.072 16:31:29 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:21.072 16:31:29 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:21.072 16:31:29 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:21.072 16:31:29 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:21.072 16:31:29 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:21.072 16:31:29 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:20:21.072 16:31:29 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:21.072 16:31:29 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:21.072 16:31:29 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:21.072 16:31:29 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:21.072 16:31:29 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:20:21.072 192.168.100.9' 00:20:21.072 16:31:29 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:20:21.072 192.168.100.9' 00:20:21.072 16:31:29 -- nvmf/common.sh@446 -- # head -n 1 00:20:21.072 16:31:29 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:21.072 16:31:29 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:20:21.072 192.168.100.9' 00:20:21.072 16:31:29 -- nvmf/common.sh@447 -- # tail -n +2 00:20:21.072 16:31:29 -- nvmf/common.sh@447 -- # head -n 1 00:20:21.073 16:31:29 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:21.073 16:31:29 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:20:21.073 16:31:29 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:21.073 16:31:29 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:20:21.073 16:31:29 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:20:21.073 16:31:29 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:20:21.073 16:31:29 -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:20:21.073 16:31:29 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:21.073 16:31:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:21.073 16:31:29 -- common/autotest_common.sh@10 -- # set +x 00:20:21.073 16:31:29 -- nvmf/common.sh@470 -- # nvmfpid=522690 00:20:21.073 16:31:29 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:21.073 16:31:29 -- nvmf/common.sh@471 -- # waitforlisten 522690 00:20:21.073 16:31:29 -- common/autotest_common.sh@817 -- # '[' -z 522690 ']' 00:20:21.073 16:31:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.073 16:31:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:21.073 16:31:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:20:21.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:21.073 16:31:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:21.073 16:31:29 -- common/autotest_common.sh@10 -- # set +x 00:20:21.073 [2024-04-26 16:31:29.961069] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:20:21.073 [2024-04-26 16:31:29.961128] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:21.073 EAL: No free 2048 kB hugepages reported on node 1 00:20:21.073 [2024-04-26 16:31:30.034652] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:21.332 [2024-04-26 16:31:30.120161] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:21.332 [2024-04-26 16:31:30.120205] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:21.332 [2024-04-26 16:31:30.120215] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:21.332 [2024-04-26 16:31:30.120224] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:21.332 [2024-04-26 16:31:30.120231] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:21.332 [2024-04-26 16:31:30.120294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:21.332 [2024-04-26 16:31:30.120297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.901 16:31:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:21.901 16:31:30 -- common/autotest_common.sh@850 -- # return 0 00:20:21.901 16:31:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:21.901 16:31:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:21.901 16:31:30 -- common/autotest_common.sh@10 -- # set +x 00:20:21.901 16:31:30 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:21.901 16:31:30 -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:20:21.901 16:31:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.901 16:31:30 -- common/autotest_common.sh@10 -- # set +x 00:20:21.901 [2024-04-26 16:31:30.838542] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xdc9c90/0xdce180) succeed. 00:20:21.901 [2024-04-26 16:31:30.847479] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xdcb190/0xe0f810) succeed. 
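The allocate_nic_ips/get_available_rdma_ips trace above resolves 192.168.100.8 and 192.168.100.9 from the mlx_0_0/mlx_0_1 net devices before the nvmf target is started. A minimal standalone sketch of that address-discovery pipeline is shown below; it assumes the same interface names exist and are configured (adjust for your NICs), and it is an illustration of the pattern, not the test framework code itself.

#!/usr/bin/env bash
# Sketch of the RDMA address discovery seen in the trace above (assumes mlx_0_0/mlx_0_1 exist).
set -euo pipefail

get_ip_address() {
    # "ip -o -4" prints one record per address; field 4 is "ADDR/PREFIX", so strip the prefix length.
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}

RDMA_IP_LIST=$(for ifc in mlx_0_0 mlx_0_1; do get_ip_address "$ifc"; done)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9 in this run
echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"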
00:20:22.160 16:31:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:22.160 16:31:30 -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:20:22.161 16:31:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:22.161 16:31:30 -- common/autotest_common.sh@10 -- # set +x 00:20:22.161 Malloc0 00:20:22.161 16:31:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:22.161 16:31:30 -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:20:22.161 16:31:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:22.161 16:31:30 -- common/autotest_common.sh@10 -- # set +x 00:20:22.161 16:31:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:22.161 16:31:31 -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:20:22.161 16:31:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:22.161 16:31:31 -- common/autotest_common.sh@10 -- # set +x 00:20:22.161 16:31:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:22.161 16:31:31 -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:20:22.161 16:31:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:22.161 16:31:31 -- common/autotest_common.sh@10 -- # set +x 00:20:22.161 [2024-04-26 16:31:31.013784] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:22.161 16:31:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:22.161 16:31:31 -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:20:22.161 16:31:31 -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:20:22.161 16:31:31 -- nvmf/common.sh@521 -- # config=() 00:20:22.161 16:31:31 -- nvmf/common.sh@521 -- # local subsystem config 00:20:22.161 16:31:31 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:22.161 16:31:31 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:22.161 { 00:20:22.161 "params": { 00:20:22.161 "name": "Nvme$subsystem", 00:20:22.161 "trtype": "$TEST_TRANSPORT", 00:20:22.161 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:22.161 "adrfam": "ipv4", 00:20:22.161 "trsvcid": "$NVMF_PORT", 00:20:22.161 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:22.161 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:22.161 "hdgst": ${hdgst:-false}, 00:20:22.161 "ddgst": ${ddgst:-false} 00:20:22.161 }, 00:20:22.161 "method": "bdev_nvme_attach_controller" 00:20:22.161 } 00:20:22.161 EOF 00:20:22.161 )") 00:20:22.161 16:31:31 -- nvmf/common.sh@543 -- # cat 00:20:22.161 16:31:31 -- nvmf/common.sh@545 -- # jq . 00:20:22.161 16:31:31 -- nvmf/common.sh@546 -- # IFS=, 00:20:22.161 16:31:31 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:22.161 "params": { 00:20:22.161 "name": "Nvme0", 00:20:22.161 "trtype": "rdma", 00:20:22.161 "traddr": "192.168.100.8", 00:20:22.161 "adrfam": "ipv4", 00:20:22.161 "trsvcid": "4420", 00:20:22.161 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:22.161 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:22.161 "hdgst": false, 00:20:22.161 "ddgst": false 00:20:22.161 }, 00:20:22.161 "method": "bdev_nvme_attach_controller" 00:20:22.161 }' 00:20:22.161 [2024-04-26 16:31:31.064950] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
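The host/dma.sh steps above build the target over RPC (Malloc0 bdev, cnode0 subsystem and namespace, RDMA listener on 192.168.100.8:4420) and then hand test_dma a generated bdev_nvme_attach_controller config on /dev/fd/62. A rough hand-driven equivalent using scripts/rpc.py is sketched below; it assumes nvmf_tgt is already listening on the default /var/tmp/spdk.sock (as the waitforlisten step above checks), that gen_nvmf_target_json from test/nvmf/common.sh has been sourced, and it abbreviates the workspace path to $SPDK.

#!/usr/bin/env bash
# Hand-driven sketch of the RPC sequence traced above (not the test script itself).
set -euo pipefail
SPDK=./spdk                      # assumption: SPDK repo root; the CI run uses the full workspace path
RPC="$SPDK/scripts/rpc.py"       # talks to nvmf_tgt on /var/tmp/spdk.sock by default

$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024
$RPC bdev_malloc_create 256 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420

# gen_nvmf_target_json (test/nvmf/common.sh) emits the bdev_nvme_attach_controller config
# printed in the trace; test_dma reads it from an inherited fd, which is what the
# --json /dev/fd/62 process substitution amounts to.
"$SPDK/test/dma/test_dma/test_dma" -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc \
    --json <(gen_nvmf_target_json 0) -b Nvme0n1 -f -x translate

The later passes in this log reuse the same invocation with -b Malloc0 -x pull_push and -b lvs0/lvol0 -x memzero, which is where the translate/pull_push/memzero counters in the per-pass summaries come from.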
00:20:22.161 [2024-04-26 16:31:31.065012] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid522767 ] 00:20:22.161 EAL: No free 2048 kB hugepages reported on node 1 00:20:22.161 [2024-04-26 16:31:31.135176] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:22.420 [2024-04-26 16:31:31.215611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:22.420 [2024-04-26 16:31:31.215614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:27.714 bdev Nvme0n1 reports 1 memory domains 00:20:27.714 bdev Nvme0n1 supports RDMA memory domain 00:20:27.714 Initialization complete, running randrw IO for 5 sec on 2 cores 00:20:27.714 ========================================================================== 00:20:27.714 Latency [us] 00:20:27.714 IOPS MiB/s Average min max 00:20:27.714 Core 2: 21749.46 84.96 734.95 243.12 8850.12 00:20:27.714 Core 3: 21833.65 85.29 732.07 241.40 8670.09 00:20:27.714 ========================================================================== 00:20:27.714 Total : 43583.11 170.25 733.51 241.40 8850.12 00:20:27.714 00:20:27.714 Total operations: 217947, translate 217947 pull_push 0 memzero 0 00:20:27.714 16:31:36 -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push 00:20:27.714 16:31:36 -- host/dma.sh@107 -- # gen_malloc_json 00:20:27.714 16:31:36 -- host/dma.sh@21 -- # jq . 00:20:27.714 [2024-04-26 16:31:36.689986] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:20:27.714 [2024-04-26 16:31:36.690041] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid523513 ] 00:20:27.714 EAL: No free 2048 kB hugepages reported on node 1 00:20:27.972 [2024-04-26 16:31:36.759375] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:27.972 [2024-04-26 16:31:36.835239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:27.972 [2024-04-26 16:31:36.835242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:33.243 bdev Malloc0 reports 2 memory domains 00:20:33.243 bdev Malloc0 doesn't support RDMA memory domain 00:20:33.243 Initialization complete, running randrw IO for 5 sec on 2 cores 00:20:33.243 ========================================================================== 00:20:33.243 Latency [us] 00:20:33.243 IOPS MiB/s Average min max 00:20:33.243 Core 2: 14473.02 56.54 1104.77 342.39 1808.49 00:20:33.243 Core 3: 14706.33 57.45 1087.21 382.99 2094.21 00:20:33.243 ========================================================================== 00:20:33.243 Total : 29179.35 113.98 1095.92 342.39 2094.21 00:20:33.243 00:20:33.243 Total operations: 145955, translate 0 pull_push 583820 memzero 0 00:20:33.243 16:31:42 -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero 00:20:33.243 16:31:42 -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:20:33.243 16:31:42 -- host/dma.sh@48 -- # local subsystem=0 00:20:33.243 16:31:42 -- host/dma.sh@50 -- # jq . 
00:20:33.243 Ignoring -M option 00:20:33.243 [2024-04-26 16:31:42.213245] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:20:33.243 [2024-04-26 16:31:42.213298] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid524220 ] 00:20:33.243 EAL: No free 2048 kB hugepages reported on node 1 00:20:33.502 [2024-04-26 16:31:42.281898] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:33.502 [2024-04-26 16:31:42.358641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:33.502 [2024-04-26 16:31:42.358644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:33.761 [2024-04-26 16:31:42.584646] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:20:39.034 [2024-04-26 16:31:47.612647] app.c: 937:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:20:39.034 bdev 559d89c8-ecd8-47ff-8495-acc7638d2239 reports 1 memory domains 00:20:39.034 bdev 559d89c8-ecd8-47ff-8495-acc7638d2239 supports RDMA memory domain 00:20:39.034 Initialization complete, running randread IO for 5 sec on 2 cores 00:20:39.034 ========================================================================== 00:20:39.034 Latency [us] 00:20:39.034 IOPS MiB/s Average min max 00:20:39.034 Core 2: 80554.19 314.66 197.90 77.47 2413.50 00:20:39.034 Core 3: 81774.06 319.43 194.95 77.93 2534.73 00:20:39.034 ========================================================================== 00:20:39.034 Total : 162328.25 634.09 196.42 77.47 2534.73 00:20:39.034 00:20:39.034 Total operations: 811732, translate 0 pull_push 0 memzero 811732 00:20:39.034 16:31:47 -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:20:39.034 EAL: No free 2048 kB hugepages reported on node 1 00:20:39.034 [2024-04-26 16:31:47.967056] subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:41.571 Initializing NVMe Controllers 00:20:41.571 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:20:41.571 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:20:41.571 Initialization complete. Launching workers. 
00:20:41.571 ======================================================== 00:20:41.571 Latency(us) 00:20:41.571 Device Information : IOPS MiB/s Average min max 00:20:41.571 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2016.00 7.88 7972.56 6980.96 7997.52 00:20:41.571 ======================================================== 00:20:41.571 Total : 2016.00 7.88 7972.56 6980.96 7997.52 00:20:41.571 00:20:41.571 16:31:50 -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate 00:20:41.571 16:31:50 -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:20:41.571 16:31:50 -- host/dma.sh@48 -- # local subsystem=0 00:20:41.571 16:31:50 -- host/dma.sh@50 -- # jq . 00:20:41.571 [2024-04-26 16:31:50.303984] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:20:41.571 [2024-04-26 16:31:50.304041] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid525299 ] 00:20:41.571 EAL: No free 2048 kB hugepages reported on node 1 00:20:41.571 [2024-04-26 16:31:50.370715] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:41.571 [2024-04-26 16:31:50.447816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:41.571 [2024-04-26 16:31:50.447819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:41.830 [2024-04-26 16:31:50.656283] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:20:47.116 [2024-04-26 16:31:55.686914] app.c: 937:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:20:47.116 bdev 2f6fc6d7-9fe4-4a0c-81b1-d4097457d722 reports 1 memory domains 00:20:47.116 bdev 2f6fc6d7-9fe4-4a0c-81b1-d4097457d722 supports RDMA memory domain 00:20:47.116 Initialization complete, running randrw IO for 5 sec on 2 cores 00:20:47.116 ========================================================================== 00:20:47.116 Latency [us] 00:20:47.116 IOPS MiB/s Average min max 00:20:47.116 Core 2: 19165.37 74.86 834.00 18.64 9923.97 00:20:47.116 Core 3: 19428.49 75.89 822.80 13.93 9562.36 00:20:47.116 ========================================================================== 00:20:47.116 Total : 38593.85 150.76 828.37 13.93 9923.97 00:20:47.116 00:20:47.116 Total operations: 193028, translate 192921 pull_push 0 memzero 107 00:20:47.116 16:31:55 -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:20:47.116 16:31:55 -- host/dma.sh@120 -- # nvmftestfini 00:20:47.116 16:31:55 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:47.116 16:31:55 -- nvmf/common.sh@117 -- # sync 00:20:47.116 16:31:55 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:20:47.116 16:31:55 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:20:47.116 16:31:55 -- nvmf/common.sh@120 -- # set +e 00:20:47.116 16:31:55 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:47.116 16:31:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:20:47.116 rmmod nvme_rdma 00:20:47.116 rmmod nvme_fabrics 00:20:47.116 16:31:55 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:47.116 16:31:55 -- nvmf/common.sh@124 -- # set -e 00:20:47.116 16:31:55 -- 
nvmf/common.sh@125 -- # return 0 00:20:47.116 16:31:55 -- nvmf/common.sh@478 -- # '[' -n 522690 ']' 00:20:47.116 16:31:55 -- nvmf/common.sh@479 -- # killprocess 522690 00:20:47.116 16:31:55 -- common/autotest_common.sh@936 -- # '[' -z 522690 ']' 00:20:47.116 16:31:55 -- common/autotest_common.sh@940 -- # kill -0 522690 00:20:47.116 16:31:55 -- common/autotest_common.sh@941 -- # uname 00:20:47.116 16:31:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:47.116 16:31:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 522690 00:20:47.116 16:31:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:47.116 16:31:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:47.116 16:31:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 522690' 00:20:47.116 killing process with pid 522690 00:20:47.116 16:31:56 -- common/autotest_common.sh@955 -- # kill 522690 00:20:47.116 16:31:56 -- common/autotest_common.sh@960 -- # wait 522690 00:20:47.116 [2024-04-26 16:31:56.086323] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:20:47.684 16:31:56 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:47.684 16:31:56 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:20:47.684 00:20:47.684 real 0m33.257s 00:20:47.684 user 1m37.413s 00:20:47.684 sys 0m6.316s 00:20:47.684 16:31:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:47.684 16:31:56 -- common/autotest_common.sh@10 -- # set +x 00:20:47.684 ************************************ 00:20:47.684 END TEST dma 00:20:47.684 ************************************ 00:20:47.684 16:31:56 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:20:47.684 16:31:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:47.684 16:31:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:47.684 16:31:56 -- common/autotest_common.sh@10 -- # set +x 00:20:47.685 ************************************ 00:20:47.685 START TEST nvmf_identify 00:20:47.685 ************************************ 00:20:47.685 16:31:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:20:47.685 * Looking for test storage... 
00:20:47.685 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:47.685 16:31:56 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:47.685 16:31:56 -- nvmf/common.sh@7 -- # uname -s 00:20:47.943 16:31:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:47.943 16:31:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:47.943 16:31:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:47.943 16:31:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:47.943 16:31:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:47.943 16:31:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:47.943 16:31:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:47.943 16:31:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:47.943 16:31:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:47.943 16:31:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:47.943 16:31:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:20:47.943 16:31:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:20:47.943 16:31:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:47.943 16:31:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:47.943 16:31:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:47.943 16:31:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:47.943 16:31:56 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:47.943 16:31:56 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:47.943 16:31:56 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:47.943 16:31:56 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:47.943 16:31:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.943 16:31:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.943 16:31:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.943 16:31:56 -- paths/export.sh@5 -- # export PATH 00:20:47.943 16:31:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.943 16:31:56 -- nvmf/common.sh@47 -- # : 0 00:20:47.943 16:31:56 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:47.943 16:31:56 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:47.943 16:31:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:47.943 16:31:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:47.943 16:31:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:47.943 16:31:56 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:47.943 16:31:56 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:47.943 16:31:56 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:47.943 16:31:56 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:47.943 16:31:56 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:47.943 16:31:56 -- host/identify.sh@14 -- # nvmftestinit 00:20:47.943 16:31:56 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:20:47.943 16:31:56 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:47.944 16:31:56 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:47.944 16:31:56 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:47.944 16:31:56 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:47.944 16:31:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:47.944 16:31:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:47.944 16:31:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:47.944 16:31:56 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:47.944 16:31:56 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:47.944 16:31:56 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:47.944 16:31:56 -- common/autotest_common.sh@10 -- # set +x 00:20:54.510 16:32:02 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:54.510 16:32:02 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:54.510 16:32:02 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:54.510 16:32:02 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:54.510 16:32:02 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:54.510 16:32:02 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:54.510 16:32:02 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:54.510 16:32:02 -- nvmf/common.sh@295 -- # net_devs=() 00:20:54.510 16:32:02 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:54.510 16:32:02 -- nvmf/common.sh@296 
-- # e810=() 00:20:54.510 16:32:02 -- nvmf/common.sh@296 -- # local -ga e810 00:20:54.510 16:32:02 -- nvmf/common.sh@297 -- # x722=() 00:20:54.510 16:32:02 -- nvmf/common.sh@297 -- # local -ga x722 00:20:54.510 16:32:02 -- nvmf/common.sh@298 -- # mlx=() 00:20:54.510 16:32:02 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:54.510 16:32:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:54.510 16:32:02 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:54.510 16:32:02 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:54.510 16:32:02 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:54.510 16:32:02 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:54.511 16:32:02 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:54.511 16:32:02 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:54.511 16:32:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:54.511 16:32:02 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:54.511 16:32:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:54.511 16:32:02 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:54.511 16:32:02 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:54.511 16:32:02 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:20:54.511 16:32:02 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:20:54.511 16:32:02 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:20:54.511 16:32:02 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:20:54.511 16:32:02 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:20:54.511 16:32:02 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:54.511 16:32:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:54.511 16:32:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:20:54.511 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:20:54.511 16:32:02 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:54.511 16:32:02 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:54.511 16:32:02 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:20:54.511 16:32:02 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:20:54.511 16:32:02 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:54.511 16:32:02 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:54.511 16:32:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:54.511 16:32:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:20:54.511 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:20:54.511 16:32:02 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:54.511 16:32:02 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:54.511 16:32:02 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:20:54.511 16:32:02 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:20:54.511 16:32:02 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:54.511 16:32:02 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:54.511 16:32:02 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:54.511 16:32:02 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:20:54.511 16:32:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:54.511 16:32:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:54.511 16:32:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 
00:20:54.511 16:32:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:54.511 16:32:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:20:54.511 Found net devices under 0000:18:00.0: mlx_0_0 00:20:54.511 16:32:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:54.511 16:32:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:54.511 16:32:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:54.511 16:32:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:54.511 16:32:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:54.511 16:32:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:20:54.511 Found net devices under 0000:18:00.1: mlx_0_1 00:20:54.511 16:32:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:54.511 16:32:02 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:54.511 16:32:02 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:54.511 16:32:02 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:54.511 16:32:02 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:20:54.511 16:32:02 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:20:54.511 16:32:02 -- nvmf/common.sh@409 -- # rdma_device_init 00:20:54.511 16:32:02 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:20:54.511 16:32:02 -- nvmf/common.sh@58 -- # uname 00:20:54.511 16:32:02 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:20:54.511 16:32:02 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:20:54.511 16:32:02 -- nvmf/common.sh@63 -- # modprobe ib_core 00:20:54.511 16:32:02 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:20:54.511 16:32:02 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:20:54.511 16:32:02 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:20:54.511 16:32:02 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:20:54.511 16:32:02 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:20:54.511 16:32:02 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:20:54.511 16:32:02 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:54.511 16:32:02 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:20:54.511 16:32:02 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:54.511 16:32:02 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:54.511 16:32:02 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:54.511 16:32:02 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:54.511 16:32:02 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:54.511 16:32:02 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:54.511 16:32:02 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:54.511 16:32:02 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:54.511 16:32:02 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:54.511 16:32:02 -- nvmf/common.sh@105 -- # continue 2 00:20:54.511 16:32:02 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:54.511 16:32:02 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:54.511 16:32:02 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:54.511 16:32:02 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:54.511 16:32:02 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:54.511 16:32:02 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:54.511 16:32:02 -- nvmf/common.sh@105 -- # continue 2 00:20:54.511 16:32:02 -- nvmf/common.sh@73 -- # for nic_name in 
$(get_rdma_if_list) 00:20:54.511 16:32:02 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:20:54.511 16:32:02 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:54.511 16:32:02 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:54.511 16:32:02 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:54.511 16:32:02 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:54.511 16:32:02 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:20:54.511 16:32:02 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:20:54.511 16:32:02 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:20:54.511 15: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:54.511 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:20:54.511 altname enp24s0f0np0 00:20:54.511 altname ens785f0np0 00:20:54.511 inet 192.168.100.8/24 scope global mlx_0_0 00:20:54.511 valid_lft forever preferred_lft forever 00:20:54.511 16:32:02 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:54.511 16:32:02 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:20:54.511 16:32:02 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:54.511 16:32:02 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:54.511 16:32:02 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:54.511 16:32:02 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:54.511 16:32:02 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:20:54.511 16:32:02 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:20:54.511 16:32:02 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:20:54.511 16: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:54.511 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:20:54.511 altname enp24s0f1np1 00:20:54.511 altname ens785f1np1 00:20:54.511 inet 192.168.100.9/24 scope global mlx_0_1 00:20:54.511 valid_lft forever preferred_lft forever 00:20:54.511 16:32:02 -- nvmf/common.sh@411 -- # return 0 00:20:54.511 16:32:02 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:54.511 16:32:02 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:54.511 16:32:02 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:20:54.511 16:32:02 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:20:54.511 16:32:02 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:20:54.511 16:32:02 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:54.511 16:32:02 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:54.511 16:32:02 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:54.511 16:32:02 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:54.511 16:32:02 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:54.511 16:32:02 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:54.511 16:32:02 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:54.511 16:32:02 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:54.511 16:32:02 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:54.511 16:32:02 -- nvmf/common.sh@105 -- # continue 2 00:20:54.511 16:32:02 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:54.511 16:32:02 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:54.511 16:32:02 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:54.511 16:32:02 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:54.511 16:32:02 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:54.511 16:32:02 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:54.511 16:32:02 -- 
nvmf/common.sh@105 -- # continue 2 00:20:54.511 16:32:02 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:54.511 16:32:02 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:20:54.511 16:32:02 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:54.511 16:32:02 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:54.511 16:32:02 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:54.511 16:32:02 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:54.511 16:32:02 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:54.511 16:32:02 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:20:54.511 16:32:02 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:54.511 16:32:02 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:54.511 16:32:02 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:54.511 16:32:02 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:54.511 16:32:02 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:20:54.511 192.168.100.9' 00:20:54.511 16:32:02 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:20:54.511 192.168.100.9' 00:20:54.511 16:32:02 -- nvmf/common.sh@446 -- # head -n 1 00:20:54.511 16:32:02 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:54.511 16:32:02 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:20:54.511 192.168.100.9' 00:20:54.511 16:32:02 -- nvmf/common.sh@447 -- # tail -n +2 00:20:54.511 16:32:02 -- nvmf/common.sh@447 -- # head -n 1 00:20:54.511 16:32:02 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:54.511 16:32:02 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:20:54.511 16:32:02 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:54.512 16:32:02 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:20:54.512 16:32:02 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:20:54.512 16:32:02 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:20:54.512 16:32:02 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:54.512 16:32:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:54.512 16:32:02 -- common/autotest_common.sh@10 -- # set +x 00:20:54.512 16:32:02 -- host/identify.sh@19 -- # nvmfpid=528959 00:20:54.512 16:32:02 -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:54.512 16:32:02 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:54.512 16:32:02 -- host/identify.sh@23 -- # waitforlisten 528959 00:20:54.512 16:32:02 -- common/autotest_common.sh@817 -- # '[' -z 528959 ']' 00:20:54.512 16:32:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:54.512 16:32:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:54.512 16:32:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:54.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:54.512 16:32:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:54.512 16:32:02 -- common/autotest_common.sh@10 -- # set +x 00:20:54.512 [2024-04-26 16:32:03.014066] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
00:20:54.512 [2024-04-26 16:32:03.014130] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:54.512 EAL: No free 2048 kB hugepages reported on node 1 00:20:54.512 [2024-04-26 16:32:03.086767] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:54.512 [2024-04-26 16:32:03.172896] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:54.512 [2024-04-26 16:32:03.172939] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:54.512 [2024-04-26 16:32:03.172948] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:54.512 [2024-04-26 16:32:03.172972] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:54.512 [2024-04-26 16:32:03.172980] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:54.512 [2024-04-26 16:32:03.173046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:54.512 [2024-04-26 16:32:03.173129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:54.512 [2024-04-26 16:32:03.173211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:54.512 [2024-04-26 16:32:03.173212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:55.080 16:32:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:55.080 16:32:03 -- common/autotest_common.sh@850 -- # return 0 00:20:55.080 16:32:03 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:55.080 16:32:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:55.080 16:32:03 -- common/autotest_common.sh@10 -- # set +x 00:20:55.080 [2024-04-26 16:32:03.841580] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1599310/0x159d800) succeed. 00:20:55.080 [2024-04-26 16:32:03.851995] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x159a950/0x15dee90) succeed. 
00:20:55.080 16:32:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:55.080 16:32:03 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:55.080 16:32:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:55.080 16:32:03 -- common/autotest_common.sh@10 -- # set +x 00:20:55.080 16:32:04 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:55.080 16:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:55.080 16:32:04 -- common/autotest_common.sh@10 -- # set +x 00:20:55.080 Malloc0 00:20:55.080 16:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:55.080 16:32:04 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:55.080 16:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:55.080 16:32:04 -- common/autotest_common.sh@10 -- # set +x 00:20:55.080 16:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:55.080 16:32:04 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:55.080 16:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:55.080 16:32:04 -- common/autotest_common.sh@10 -- # set +x 00:20:55.080 16:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:55.080 16:32:04 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:55.080 16:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:55.080 16:32:04 -- common/autotest_common.sh@10 -- # set +x 00:20:55.080 [2024-04-26 16:32:04.066321] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:55.080 16:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:55.080 16:32:04 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:20:55.080 16:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:55.080 16:32:04 -- common/autotest_common.sh@10 -- # set +x 00:20:55.080 16:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:55.080 16:32:04 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:55.080 16:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:55.080 16:32:04 -- common/autotest_common.sh@10 -- # set +x 00:20:55.080 [2024-04-26 16:32:04.082126] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:20:55.080 [ 00:20:55.080 { 00:20:55.080 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:55.080 "subtype": "Discovery", 00:20:55.080 "listen_addresses": [ 00:20:55.080 { 00:20:55.080 "transport": "RDMA", 00:20:55.080 "trtype": "RDMA", 00:20:55.080 "adrfam": "IPv4", 00:20:55.080 "traddr": "192.168.100.8", 00:20:55.080 "trsvcid": "4420" 00:20:55.080 } 00:20:55.080 ], 00:20:55.080 "allow_any_host": true, 00:20:55.080 "hosts": [] 00:20:55.080 }, 00:20:55.080 { 00:20:55.080 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:55.080 "subtype": "NVMe", 00:20:55.080 "listen_addresses": [ 00:20:55.080 { 00:20:55.080 "transport": "RDMA", 00:20:55.080 "trtype": "RDMA", 00:20:55.080 "adrfam": "IPv4", 00:20:55.080 "traddr": "192.168.100.8", 00:20:55.080 "trsvcid": "4420" 00:20:55.080 } 00:20:55.080 ], 00:20:55.080 "allow_any_host": true, 00:20:55.080 "hosts": [], 00:20:55.080 "serial_number": "SPDK00000000000001", 
00:20:55.080 "model_number": "SPDK bdev Controller", 00:20:55.080 "max_namespaces": 32, 00:20:55.080 "min_cntlid": 1, 00:20:55.080 "max_cntlid": 65519, 00:20:55.080 "namespaces": [ 00:20:55.080 { 00:20:55.080 "nsid": 1, 00:20:55.080 "bdev_name": "Malloc0", 00:20:55.080 "name": "Malloc0", 00:20:55.080 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:55.080 "eui64": "ABCDEF0123456789", 00:20:55.080 "uuid": "b6781f14-9a88-429f-a786-6ebaa83dd87f" 00:20:55.080 } 00:20:55.080 ] 00:20:55.080 } 00:20:55.080 ] 00:20:55.080 16:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:55.080 16:32:04 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:55.349 [2024-04-26 16:32:04.127539] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:20:55.349 [2024-04-26 16:32:04.127578] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid529164 ] 00:20:55.349 EAL: No free 2048 kB hugepages reported on node 1 00:20:55.349 [2024-04-26 16:32:04.174661] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:55.349 [2024-04-26 16:32:04.174737] nvme_rdma.c:2261:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:20:55.349 [2024-04-26 16:32:04.174753] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:20:55.349 [2024-04-26 16:32:04.174761] nvme_rdma.c:1295:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:20:55.349 [2024-04-26 16:32:04.174793] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:55.349 [2024-04-26 16:32:04.180761] nvme_rdma.c: 510:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
00:20:55.349 [2024-04-26 16:32:04.191023] nvme_rdma.c:1180:nvme_rdma_connect_established: *DEBUG*: rc =0 00:20:55.349 [2024-04-26 16:32:04.191033] nvme_rdma.c:1185:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:20:55.349 [2024-04-26 16:32:04.191042] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf680 length 0x10 lkey 0x182300 00:20:55.349 [2024-04-26 16:32:04.191049] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6a8 length 0x10 lkey 0x182300 00:20:55.349 [2024-04-26 16:32:04.191055] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6d0 length 0x10 lkey 0x182300 00:20:55.349 [2024-04-26 16:32:04.191062] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6f8 length 0x10 lkey 0x182300 00:20:55.349 [2024-04-26 16:32:04.191068] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf720 length 0x10 lkey 0x182300 00:20:55.349 [2024-04-26 16:32:04.191075] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf748 length 0x10 lkey 0x182300 00:20:55.349 [2024-04-26 16:32:04.191081] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf770 length 0x10 lkey 0x182300 00:20:55.349 [2024-04-26 16:32:04.191087] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf798 length 0x10 lkey 0x182300 00:20:55.349 [2024-04-26 16:32:04.191094] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7c0 length 0x10 lkey 0x182300 00:20:55.349 [2024-04-26 16:32:04.191100] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7e8 length 0x10 lkey 0x182300 00:20:55.349 [2024-04-26 16:32:04.191107] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf810 length 0x10 lkey 0x182300 00:20:55.349 [2024-04-26 16:32:04.191113] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf838 length 0x10 lkey 0x182300 00:20:55.349 [2024-04-26 16:32:04.191119] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf860 length 0x10 lkey 0x182300 00:20:55.349 [2024-04-26 16:32:04.191126] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf888 length 0x10 lkey 0x182300 00:20:55.349 [2024-04-26 16:32:04.191132] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8b0 length 0x10 lkey 0x182300 00:20:55.349 [2024-04-26 16:32:04.191139] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8d8 length 0x10 lkey 0x182300 00:20:55.349 [2024-04-26 16:32:04.191145] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf900 length 0x10 lkey 0x182300 00:20:55.349 [2024-04-26 16:32:04.191151] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf928 length 0x10 lkey 0x182300 00:20:55.349 [2024-04-26 16:32:04.191158] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf950 length 0x10 lkey 0x182300 00:20:55.349 [2024-04-26 16:32:04.191164] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf978 length 0x10 lkey 0x182300 00:20:55.349 [2024-04-26 16:32:04.191171] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9a0 length 0x10 lkey 0x182300 00:20:55.349 [2024-04-26 16:32:04.191177] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9c8 length 0x10 lkey 0x182300 00:20:55.349 [2024-04-26 16:32:04.191183] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9f0 length 0x10 lkey 0x182300 00:20:55.349 [2024-04-26 
16:32:04.191190] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa18 length 0x10 lkey 0x182300 00:20:55.349 [2024-04-26 16:32:04.191196] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa40 length 0x10 lkey 0x182300 00:20:55.349 [2024-04-26 16:32:04.191202] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa68 length 0x10 lkey 0x182300 00:20:55.349 [2024-04-26 16:32:04.191209] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa90 length 0x10 lkey 0x182300 00:20:55.349 [2024-04-26 16:32:04.191218] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfab8 length 0x10 lkey 0x182300 00:20:55.349 [2024-04-26 16:32:04.191225] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfae0 length 0x10 lkey 0x182300 00:20:55.349 [2024-04-26 16:32:04.191231] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb08 length 0x10 lkey 0x182300 00:20:55.349 [2024-04-26 16:32:04.191238] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb30 length 0x10 lkey 0x182300 00:20:55.349 [2024-04-26 16:32:04.191243] nvme_rdma.c:1199:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:20:55.349 [2024-04-26 16:32:04.191249] nvme_rdma.c:1202:nvme_rdma_connect_established: *DEBUG*: rc =0 00:20:55.349 [2024-04-26 16:32:04.191254] nvme_rdma.c:1207:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:20:55.349 [2024-04-26 16:32:04.191277] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182300 00:20:55.349 [2024-04-26 16:32:04.191291] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf200 len:0x400 key:0x182300 00:20:55.349 [2024-04-26 16:32:04.196351] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.349 [2024-04-26 16:32:04.196361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:20:55.349 [2024-04-26 16:32:04.196369] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf680 length 0x10 lkey 0x182300 00:20:55.349 [2024-04-26 16:32:04.196376] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:55.349 [2024-04-26 16:32:04.196384] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:55.349 [2024-04-26 16:32:04.196391] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:55.349 [2024-04-26 16:32:04.196407] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182300 00:20:55.350 [2024-04-26 16:32:04.196416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.350 [2024-04-26 16:32:04.196441] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.350 [2024-04-26 16:32:04.196447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:20:55.350 [2024-04-26 16:32:04.196457] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:55.350 [2024-04-26 16:32:04.196463] 
nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a8 length 0x10 lkey 0x182300 00:20:55.350 [2024-04-26 16:32:04.196470] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:55.350 [2024-04-26 16:32:04.196478] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182300 00:20:55.350 [2024-04-26 16:32:04.196486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.350 [2024-04-26 16:32:04.196505] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.350 [2024-04-26 16:32:04.196511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:20:55.350 [2024-04-26 16:32:04.196517] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:55.350 [2024-04-26 16:32:04.196524] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6d0 length 0x10 lkey 0x182300 00:20:55.350 [2024-04-26 16:32:04.196531] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:55.350 [2024-04-26 16:32:04.196540] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182300 00:20:55.350 [2024-04-26 16:32:04.196548] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.350 [2024-04-26 16:32:04.196566] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.350 [2024-04-26 16:32:04.196571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:55.350 [2024-04-26 16:32:04.196578] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:55.350 [2024-04-26 16:32:04.196584] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f8 length 0x10 lkey 0x182300 00:20:55.350 [2024-04-26 16:32:04.196593] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182300 00:20:55.350 [2024-04-26 16:32:04.196601] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.350 [2024-04-26 16:32:04.196620] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.350 [2024-04-26 16:32:04.196625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:55.350 [2024-04-26 16:32:04.196632] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:55.350 [2024-04-26 16:32:04.196638] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:55.350 [2024-04-26 16:32:04.196645] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf720 length 0x10 lkey 0x182300 00:20:55.350 [2024-04-26 16:32:04.196652] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:55.350 [2024-04-26 16:32:04.196758] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:55.350 [2024-04-26 16:32:04.196765] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:55.350 [2024-04-26 16:32:04.196774] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182300 00:20:55.350 [2024-04-26 16:32:04.196782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.350 [2024-04-26 16:32:04.196798] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.350 [2024-04-26 16:32:04.196803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:55.350 [2024-04-26 16:32:04.196810] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:55.350 [2024-04-26 16:32:04.196816] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf748 length 0x10 lkey 0x182300 00:20:55.350 [2024-04-26 16:32:04.196824] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182300 00:20:55.350 [2024-04-26 16:32:04.196832] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.350 [2024-04-26 16:32:04.196853] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.350 [2024-04-26 16:32:04.196859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:20:55.350 [2024-04-26 16:32:04.196865] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:55.350 [2024-04-26 16:32:04.196873] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:55.350 [2024-04-26 16:32:04.196880] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf770 length 0x10 lkey 0x182300 00:20:55.350 [2024-04-26 16:32:04.196887] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:55.350 [2024-04-26 16:32:04.196895] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:55.350 [2024-04-26 16:32:04.196905] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182300 00:20:55.350 [2024-04-26 16:32:04.196913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x182300 00:20:55.350 [2024-04-26 16:32:04.196947] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.350 [2024-04-26 16:32:04.196953] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:55.350 [2024-04-26 16:32:04.196962] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:55.350 [2024-04-26 16:32:04.196969] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:55.350 [2024-04-26 16:32:04.196975] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:55.350 [2024-04-26 16:32:04.196984] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:55.350 [2024-04-26 16:32:04.196990] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:55.350 [2024-04-26 16:32:04.196996] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:55.350 [2024-04-26 16:32:04.197002] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf798 length 0x10 lkey 0x182300 00:20:55.350 [2024-04-26 16:32:04.197010] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:55.350 [2024-04-26 16:32:04.197018] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182300 00:20:55.350 [2024-04-26 16:32:04.197026] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.350 [2024-04-26 16:32:04.197050] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.350 [2024-04-26 16:32:04.197056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:55.350 [2024-04-26 16:32:04.197066] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0500 length 0x40 lkey 0x182300 00:20:55.350 [2024-04-26 16:32:04.197073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:55.350 [2024-04-26 16:32:04.197080] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0640 length 0x40 lkey 0x182300 00:20:55.350 [2024-04-26 16:32:04.197087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:55.350 [2024-04-26 16:32:04.197095] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.350 [2024-04-26 16:32:04.197102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:55.350 [2024-04-26 16:32:04.197109] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d08c0 length 0x40 lkey 0x182300 00:20:55.350 [2024-04-26 16:32:04.197117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:55.350 [2024-04-26 16:32:04.197124] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout 
(timeout 30000 ms) 00:20:55.350 [2024-04-26 16:32:04.197130] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7c0 length 0x10 lkey 0x182300 00:20:55.350 [2024-04-26 16:32:04.197140] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:55.350 [2024-04-26 16:32:04.197148] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182300 00:20:55.350 [2024-04-26 16:32:04.197156] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.350 [2024-04-26 16:32:04.197175] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.350 [2024-04-26 16:32:04.197181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:20:55.350 [2024-04-26 16:32:04.197188] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:55.350 [2024-04-26 16:32:04.197194] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:55.350 [2024-04-26 16:32:04.197200] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7e8 length 0x10 lkey 0x182300 00:20:55.350 [2024-04-26 16:32:04.197210] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182300 00:20:55.350 [2024-04-26 16:32:04.197218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x182300 00:20:55.350 [2024-04-26 16:32:04.197239] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.350 [2024-04-26 16:32:04.197245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:55.350 [2024-04-26 16:32:04.197253] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf810 length 0x10 lkey 0x182300 00:20:55.351 [2024-04-26 16:32:04.197263] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:55.351 [2024-04-26 16:32:04.197283] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182300 00:20:55.351 [2024-04-26 16:32:04.197292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x182300 00:20:55.351 [2024-04-26 16:32:04.197299] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a00 length 0x40 lkey 0x182300 00:20:55.351 [2024-04-26 16:32:04.197307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:55.351 [2024-04-26 16:32:04.197321] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.351 [2024-04-26 16:32:04.197327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:55.351 [2024-04-26 16:32:04.197339] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: 
local addr 0x2000003d0b40 length 0x40 lkey 0x182300 00:20:55.351 [2024-04-26 16:32:04.197350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x182300 00:20:55.351 [2024-04-26 16:32:04.197357] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf838 length 0x10 lkey 0x182300 00:20:55.351 [2024-04-26 16:32:04.197364] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.351 [2024-04-26 16:32:04.197371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:55.351 [2024-04-26 16:32:04.197377] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf860 length 0x10 lkey 0x182300 00:20:55.351 [2024-04-26 16:32:04.197384] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.351 [2024-04-26 16:32:04.197390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:55.351 [2024-04-26 16:32:04.197400] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a00 length 0x40 lkey 0x182300 00:20:55.351 [2024-04-26 16:32:04.197407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x182300 00:20:55.351 [2024-04-26 16:32:04.197414] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf888 length 0x10 lkey 0x182300 00:20:55.351 [2024-04-26 16:32:04.197432] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.351 [2024-04-26 16:32:04.197438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:55.351 [2024-04-26 16:32:04.197449] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8b0 length 0x10 lkey 0x182300 00:20:55.351 ===================================================== 00:20:55.351 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:55.351 ===================================================== 00:20:55.351 Controller Capabilities/Features 00:20:55.351 ================================ 00:20:55.351 Vendor ID: 0000 00:20:55.351 Subsystem Vendor ID: 0000 00:20:55.351 Serial Number: .................... 00:20:55.351 Model Number: ........................................ 
00:20:55.351 Firmware Version: 24.05 00:20:55.351 Recommended Arb Burst: 0 00:20:55.351 IEEE OUI Identifier: 00 00 00 00:20:55.351 Multi-path I/O 00:20:55.351 May have multiple subsystem ports: No 00:20:55.351 May have multiple controllers: No 00:20:55.351 Associated with SR-IOV VF: No 00:20:55.351 Max Data Transfer Size: 131072 00:20:55.351 Max Number of Namespaces: 0 00:20:55.351 Max Number of I/O Queues: 1024 00:20:55.351 NVMe Specification Version (VS): 1.3 00:20:55.351 NVMe Specification Version (Identify): 1.3 00:20:55.351 Maximum Queue Entries: 128 00:20:55.351 Contiguous Queues Required: Yes 00:20:55.351 Arbitration Mechanisms Supported 00:20:55.351 Weighted Round Robin: Not Supported 00:20:55.351 Vendor Specific: Not Supported 00:20:55.351 Reset Timeout: 15000 ms 00:20:55.351 Doorbell Stride: 4 bytes 00:20:55.351 NVM Subsystem Reset: Not Supported 00:20:55.351 Command Sets Supported 00:20:55.351 NVM Command Set: Supported 00:20:55.351 Boot Partition: Not Supported 00:20:55.351 Memory Page Size Minimum: 4096 bytes 00:20:55.351 Memory Page Size Maximum: 4096 bytes 00:20:55.351 Persistent Memory Region: Not Supported 00:20:55.351 Optional Asynchronous Events Supported 00:20:55.351 Namespace Attribute Notices: Not Supported 00:20:55.351 Firmware Activation Notices: Not Supported 00:20:55.351 ANA Change Notices: Not Supported 00:20:55.351 PLE Aggregate Log Change Notices: Not Supported 00:20:55.351 LBA Status Info Alert Notices: Not Supported 00:20:55.351 EGE Aggregate Log Change Notices: Not Supported 00:20:55.351 Normal NVM Subsystem Shutdown event: Not Supported 00:20:55.351 Zone Descriptor Change Notices: Not Supported 00:20:55.351 Discovery Log Change Notices: Supported 00:20:55.351 Controller Attributes 00:20:55.351 128-bit Host Identifier: Not Supported 00:20:55.351 Non-Operational Permissive Mode: Not Supported 00:20:55.351 NVM Sets: Not Supported 00:20:55.351 Read Recovery Levels: Not Supported 00:20:55.351 Endurance Groups: Not Supported 00:20:55.351 Predictable Latency Mode: Not Supported 00:20:55.351 Traffic Based Keep ALive: Not Supported 00:20:55.351 Namespace Granularity: Not Supported 00:20:55.351 SQ Associations: Not Supported 00:20:55.351 UUID List: Not Supported 00:20:55.351 Multi-Domain Subsystem: Not Supported 00:20:55.351 Fixed Capacity Management: Not Supported 00:20:55.351 Variable Capacity Management: Not Supported 00:20:55.351 Delete Endurance Group: Not Supported 00:20:55.351 Delete NVM Set: Not Supported 00:20:55.351 Extended LBA Formats Supported: Not Supported 00:20:55.351 Flexible Data Placement Supported: Not Supported 00:20:55.351 00:20:55.351 Controller Memory Buffer Support 00:20:55.351 ================================ 00:20:55.351 Supported: No 00:20:55.351 00:20:55.351 Persistent Memory Region Support 00:20:55.351 ================================ 00:20:55.351 Supported: No 00:20:55.351 00:20:55.351 Admin Command Set Attributes 00:20:55.351 ============================ 00:20:55.351 Security Send/Receive: Not Supported 00:20:55.351 Format NVM: Not Supported 00:20:55.351 Firmware Activate/Download: Not Supported 00:20:55.351 Namespace Management: Not Supported 00:20:55.351 Device Self-Test: Not Supported 00:20:55.351 Directives: Not Supported 00:20:55.351 NVMe-MI: Not Supported 00:20:55.351 Virtualization Management: Not Supported 00:20:55.351 Doorbell Buffer Config: Not Supported 00:20:55.351 Get LBA Status Capability: Not Supported 00:20:55.351 Command & Feature Lockdown Capability: Not Supported 00:20:55.351 Abort Command Limit: 1 00:20:55.351 Async 
Event Request Limit: 4 00:20:55.351 Number of Firmware Slots: N/A 00:20:55.351 Firmware Slot 1 Read-Only: N/A 00:20:55.351 Firmware Activation Without Reset: N/A 00:20:55.351 Multiple Update Detection Support: N/A 00:20:55.351 Firmware Update Granularity: No Information Provided 00:20:55.351 Per-Namespace SMART Log: No 00:20:55.351 Asymmetric Namespace Access Log Page: Not Supported 00:20:55.351 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:55.351 Command Effects Log Page: Not Supported 00:20:55.351 Get Log Page Extended Data: Supported 00:20:55.351 Telemetry Log Pages: Not Supported 00:20:55.351 Persistent Event Log Pages: Not Supported 00:20:55.351 Supported Log Pages Log Page: May Support 00:20:55.351 Commands Supported & Effects Log Page: Not Supported 00:20:55.351 Feature Identifiers & Effects Log Page:May Support 00:20:55.351 NVMe-MI Commands & Effects Log Page: May Support 00:20:55.351 Data Area 4 for Telemetry Log: Not Supported 00:20:55.351 Error Log Page Entries Supported: 128 00:20:55.351 Keep Alive: Not Supported 00:20:55.351 00:20:55.351 NVM Command Set Attributes 00:20:55.351 ========================== 00:20:55.351 Submission Queue Entry Size 00:20:55.351 Max: 1 00:20:55.351 Min: 1 00:20:55.351 Completion Queue Entry Size 00:20:55.351 Max: 1 00:20:55.351 Min: 1 00:20:55.351 Number of Namespaces: 0 00:20:55.351 Compare Command: Not Supported 00:20:55.351 Write Uncorrectable Command: Not Supported 00:20:55.351 Dataset Management Command: Not Supported 00:20:55.351 Write Zeroes Command: Not Supported 00:20:55.351 Set Features Save Field: Not Supported 00:20:55.351 Reservations: Not Supported 00:20:55.351 Timestamp: Not Supported 00:20:55.351 Copy: Not Supported 00:20:55.351 Volatile Write Cache: Not Present 00:20:55.351 Atomic Write Unit (Normal): 1 00:20:55.351 Atomic Write Unit (PFail): 1 00:20:55.351 Atomic Compare & Write Unit: 1 00:20:55.351 Fused Compare & Write: Supported 00:20:55.351 Scatter-Gather List 00:20:55.351 SGL Command Set: Supported 00:20:55.351 SGL Keyed: Supported 00:20:55.351 SGL Bit Bucket Descriptor: Not Supported 00:20:55.351 SGL Metadata Pointer: Not Supported 00:20:55.351 Oversized SGL: Not Supported 00:20:55.351 SGL Metadata Address: Not Supported 00:20:55.351 SGL Offset: Supported 00:20:55.351 Transport SGL Data Block: Not Supported 00:20:55.351 Replay Protected Memory Block: Not Supported 00:20:55.351 00:20:55.351 Firmware Slot Information 00:20:55.351 ========================= 00:20:55.351 Active slot: 0 00:20:55.351 00:20:55.351 00:20:55.352 Error Log 00:20:55.352 ========= 00:20:55.352 00:20:55.352 Active Namespaces 00:20:55.352 ================= 00:20:55.352 Discovery Log Page 00:20:55.352 ================== 00:20:55.352 Generation Counter: 2 00:20:55.352 Number of Records: 2 00:20:55.352 Record Format: 0 00:20:55.352 00:20:55.352 Discovery Log Entry 0 00:20:55.352 ---------------------- 00:20:55.352 Transport Type: 1 (RDMA) 00:20:55.352 Address Family: 1 (IPv4) 00:20:55.352 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:55.352 Entry Flags: 00:20:55.352 Duplicate Returned Information: 1 00:20:55.352 Explicit Persistent Connection Support for Discovery: 1 00:20:55.352 Transport Requirements: 00:20:55.352 Secure Channel: Not Required 00:20:55.352 Port ID: 0 (0x0000) 00:20:55.352 Controller ID: 65535 (0xffff) 00:20:55.352 Admin Max SQ Size: 128 00:20:55.352 Transport Service Identifier: 4420 00:20:55.352 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:55.352 Transport Address: 192.168.100.8 00:20:55.352 
Transport Specific Address Subtype - RDMA 00:20:55.352 RDMA QP Service Type: 1 (Reliable Connected) 00:20:55.352 RDMA Provider Type: 1 (No provider specified) 00:20:55.352 RDMA CM Service: 1 (RDMA_CM) 00:20:55.352 Discovery Log Entry 1 00:20:55.352 ---------------------- 00:20:55.352 Transport Type: 1 (RDMA) 00:20:55.352 Address Family: 1 (IPv4) 00:20:55.352 Subsystem Type: 2 (NVM Subsystem) 00:20:55.352 Entry Flags: 00:20:55.352 Duplicate Returned Information: 0 00:20:55.352 Explicit Persistent Connection Support for Discovery: 0 00:20:55.352 Transport Requirements: 00:20:55.352 Secure Channel: Not Required 00:20:55.352 Port ID: 0 (0x0000) 00:20:55.352 Controller ID: 65535 (0xffff) 00:20:55.352 Admin Max SQ Size: [2024-04-26 16:32:04.197521] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:20:55.352 [2024-04-26 16:32:04.197531] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 10638 doesn't match qid 00:20:55.352 [2024-04-26 16:32:04.197545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32546 cdw0:5 sqhd:7790 p:0 m:0 dnr:0 00:20:55.352 [2024-04-26 16:32:04.197552] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 10638 doesn't match qid 00:20:55.352 [2024-04-26 16:32:04.197560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32546 cdw0:5 sqhd:7790 p:0 m:0 dnr:0 00:20:55.352 [2024-04-26 16:32:04.197567] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 10638 doesn't match qid 00:20:55.352 [2024-04-26 16:32:04.197575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32546 cdw0:5 sqhd:7790 p:0 m:0 dnr:0 00:20:55.352 [2024-04-26 16:32:04.197582] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 10638 doesn't match qid 00:20:55.352 [2024-04-26 16:32:04.197589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32546 cdw0:5 sqhd:7790 p:0 m:0 dnr:0 00:20:55.352 [2024-04-26 16:32:04.197598] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d08c0 length 0x40 lkey 0x182300 00:20:55.352 [2024-04-26 16:32:04.197606] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.352 [2024-04-26 16:32:04.197627] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.352 [2024-04-26 16:32:04.197633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:20:55.352 [2024-04-26 16:32:04.197644] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.352 [2024-04-26 16:32:04.197652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.352 [2024-04-26 16:32:04.197659] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8d8 length 0x10 lkey 0x182300 00:20:55.352 [2024-04-26 16:32:04.197677] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.352 [2024-04-26 16:32:04.197683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:55.352 [2024-04-26 16:32:04.197689] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:55.352 [2024-04-26 16:32:04.197696] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:55.352 [2024-04-26 16:32:04.197703] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf900 length 0x10 lkey 0x182300 00:20:55.352 [2024-04-26 16:32:04.197712] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.352 [2024-04-26 16:32:04.197720] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.352 [2024-04-26 16:32:04.197738] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.352 [2024-04-26 16:32:04.197744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:20:55.352 [2024-04-26 16:32:04.197750] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf928 length 0x10 lkey 0x182300 00:20:55.352 [2024-04-26 16:32:04.197759] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.352 [2024-04-26 16:32:04.197767] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.352 [2024-04-26 16:32:04.197782] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.352 [2024-04-26 16:32:04.197788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:20:55.352 [2024-04-26 16:32:04.197794] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf950 length 0x10 lkey 0x182300 00:20:55.352 [2024-04-26 16:32:04.197804] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.352 [2024-04-26 16:32:04.197812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.352 [2024-04-26 16:32:04.197833] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.352 [2024-04-26 16:32:04.197839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:20:55.352 [2024-04-26 16:32:04.197845] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf978 length 0x10 lkey 0x182300 00:20:55.352 [2024-04-26 16:32:04.197854] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.352 [2024-04-26 16:32:04.197862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.352 [2024-04-26 16:32:04.197886] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.352 [2024-04-26 16:32:04.197892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:20:55.352 [2024-04-26 16:32:04.197898] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9a0 length 0x10 lkey 0x182300 00:20:55.352 [2024-04-26 16:32:04.197907] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 
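At this point the discovery controller is being shut down (RTD3E = 0 us, 10 s shutdown timeout) and the host polls CSTS over Fabrics Property Get until shutdown completes. Once the identify passes finish, the test script normally tears the target configuration back down over RPC; a hedged sketch of that cleanup, assuming the same subsystem and bdev names as above rather than the exact commands of this run:

  # Remove the NVM subsystem and its backing malloc bdev (typical test cleanup)
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py bdev_malloc_delete Malloc0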
00:20:55.352 [2024-04-26 16:32:04.197915] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.352 [2024-04-26 16:32:04.197929] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.352 [2024-04-26 16:32:04.197935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:20:55.352 [2024-04-26 16:32:04.197942] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9c8 length 0x10 lkey 0x182300 00:20:55.352 [2024-04-26 16:32:04.197951] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.352 [2024-04-26 16:32:04.197959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.352 [2024-04-26 16:32:04.197973] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.352 [2024-04-26 16:32:04.197980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:20:55.352 [2024-04-26 16:32:04.197986] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9f0 length 0x10 lkey 0x182300 00:20:55.352 [2024-04-26 16:32:04.197995] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.352 [2024-04-26 16:32:04.198003] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.352 [2024-04-26 16:32:04.198023] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.352 [2024-04-26 16:32:04.198029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:20:55.352 [2024-04-26 16:32:04.198035] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa18 length 0x10 lkey 0x182300 00:20:55.352 [2024-04-26 16:32:04.198044] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.352 [2024-04-26 16:32:04.198052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.352 [2024-04-26 16:32:04.198074] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.352 [2024-04-26 16:32:04.198079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:20:55.352 [2024-04-26 16:32:04.198086] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa40 length 0x10 lkey 0x182300 00:20:55.352 [2024-04-26 16:32:04.198095] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.352 [2024-04-26 16:32:04.198103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.352 [2024-04-26 16:32:04.198120] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.352 [2024-04-26 16:32:04.198126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:20:55.352 [2024-04-26 16:32:04.198133] 
nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa68 length 0x10 lkey 0x182300 00:20:55.352 [2024-04-26 16:32:04.198142] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.352 [2024-04-26 16:32:04.198150] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.353 [2024-04-26 16:32:04.198167] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.353 [2024-04-26 16:32:04.198173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:20:55.353 [2024-04-26 16:32:04.198179] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa90 length 0x10 lkey 0x182300 00:20:55.353 [2024-04-26 16:32:04.198188] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.353 [2024-04-26 16:32:04.198196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.353 [2024-04-26 16:32:04.198215] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.353 [2024-04-26 16:32:04.198221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:20:55.353 [2024-04-26 16:32:04.198227] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfab8 length 0x10 lkey 0x182300 00:20:55.353 [2024-04-26 16:32:04.198236] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.353 [2024-04-26 16:32:04.198244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.353 [2024-04-26 16:32:04.198263] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.353 [2024-04-26 16:32:04.198269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:20:55.353 [2024-04-26 16:32:04.198275] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfae0 length 0x10 lkey 0x182300 00:20:55.353 [2024-04-26 16:32:04.198284] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.353 [2024-04-26 16:32:04.198292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.353 [2024-04-26 16:32:04.198309] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.353 [2024-04-26 16:32:04.198315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:20:55.353 [2024-04-26 16:32:04.198321] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb08 length 0x10 lkey 0x182300 00:20:55.353 [2024-04-26 16:32:04.198330] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.353 [2024-04-26 16:32:04.198338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.353 [2024-04-26 16:32:04.198357] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.353 [2024-04-26 16:32:04.198363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:20:55.353 [2024-04-26 16:32:04.198369] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb30 length 0x10 lkey 0x182300 00:20:55.353 [2024-04-26 16:32:04.198378] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.353 [2024-04-26 16:32:04.198386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.353 [2024-04-26 16:32:04.198407] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.353 [2024-04-26 16:32:04.198413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:20:55.353 [2024-04-26 16:32:04.198419] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf680 length 0x10 lkey 0x182300 00:20:55.353 [2024-04-26 16:32:04.198428] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.353 [2024-04-26 16:32:04.198436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.353 [2024-04-26 16:32:04.198454] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.353 [2024-04-26 16:32:04.198459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:20:55.353 [2024-04-26 16:32:04.198466] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a8 length 0x10 lkey 0x182300 00:20:55.353 [2024-04-26 16:32:04.198475] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.353 [2024-04-26 16:32:04.198483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.353 [2024-04-26 16:32:04.198506] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.353 [2024-04-26 16:32:04.198511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:20:55.353 [2024-04-26 16:32:04.198518] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6d0 length 0x10 lkey 0x182300 00:20:55.353 [2024-04-26 16:32:04.198527] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.353 [2024-04-26 16:32:04.198535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.353 [2024-04-26 16:32:04.198552] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.353 [2024-04-26 16:32:04.198558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:20:55.353 [2024-04-26 16:32:04.198564] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f8 length 0x10 lkey 0x182300 00:20:55.353 [2024-04-26 16:32:04.198573] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 
0x40 lkey 0x182300 00:20:55.353 [2024-04-26 16:32:04.198581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.353 [2024-04-26 16:32:04.198602] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.353 [2024-04-26 16:32:04.198608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:20:55.353 [2024-04-26 16:32:04.198614] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf720 length 0x10 lkey 0x182300 00:20:55.353 [2024-04-26 16:32:04.198623] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.353 [2024-04-26 16:32:04.198631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.353 [2024-04-26 16:32:04.198647] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.353 [2024-04-26 16:32:04.198652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:20:55.353 [2024-04-26 16:32:04.198659] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf748 length 0x10 lkey 0x182300 00:20:55.353 [2024-04-26 16:32:04.198668] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.353 [2024-04-26 16:32:04.198675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.353 [2024-04-26 16:32:04.198697] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.353 [2024-04-26 16:32:04.198702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:20:55.353 [2024-04-26 16:32:04.198709] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf770 length 0x10 lkey 0x182300 00:20:55.353 [2024-04-26 16:32:04.198718] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.353 [2024-04-26 16:32:04.198726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.353 [2024-04-26 16:32:04.198745] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.353 [2024-04-26 16:32:04.198751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:20:55.353 [2024-04-26 16:32:04.198757] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf798 length 0x10 lkey 0x182300 00:20:55.353 [2024-04-26 16:32:04.198766] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.353 [2024-04-26 16:32:04.198774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.353 [2024-04-26 16:32:04.198793] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.353 [2024-04-26 16:32:04.198799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:20:55.353 [2024-04-26 
16:32:04.198805] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7c0 length 0x10 lkey 0x182300 00:20:55.353 [2024-04-26 16:32:04.198814] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.353 [2024-04-26 16:32:04.198823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.353 [2024-04-26 16:32:04.198837] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.353 [2024-04-26 16:32:04.198843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:20:55.353 [2024-04-26 16:32:04.198849] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7e8 length 0x10 lkey 0x182300 00:20:55.353 [2024-04-26 16:32:04.198858] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.353 [2024-04-26 16:32:04.198866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.353 [2024-04-26 16:32:04.198888] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.353 [2024-04-26 16:32:04.198893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:20:55.353 [2024-04-26 16:32:04.198900] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf810 length 0x10 lkey 0x182300 00:20:55.353 [2024-04-26 16:32:04.198909] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.353 [2024-04-26 16:32:04.198916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.353 [2024-04-26 16:32:04.198938] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.353 [2024-04-26 16:32:04.198943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:20:55.353 [2024-04-26 16:32:04.198950] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf838 length 0x10 lkey 0x182300 00:20:55.353 [2024-04-26 16:32:04.198959] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.353 [2024-04-26 16:32:04.198967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.354 [2024-04-26 16:32:04.198984] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.354 [2024-04-26 16:32:04.198990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:20:55.354 [2024-04-26 16:32:04.198996] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf860 length 0x10 lkey 0x182300 00:20:55.354 [2024-04-26 16:32:04.199005] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.354 [2024-04-26 16:32:04.199013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.354 [2024-04-26 16:32:04.199031] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.354 [2024-04-26 16:32:04.199036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:20:55.354 [2024-04-26 16:32:04.199043] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf888 length 0x10 lkey 0x182300 00:20:55.354 [2024-04-26 16:32:04.199052] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.354 [2024-04-26 16:32:04.199059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.354 [2024-04-26 16:32:04.199085] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.354 [2024-04-26 16:32:04.199090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:20:55.354 [2024-04-26 16:32:04.199097] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8b0 length 0x10 lkey 0x182300 00:20:55.354 [2024-04-26 16:32:04.199106] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.354 [2024-04-26 16:32:04.199115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.354 [2024-04-26 16:32:04.199138] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.354 [2024-04-26 16:32:04.199144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:20:55.354 [2024-04-26 16:32:04.199150] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8d8 length 0x10 lkey 0x182300 00:20:55.354 [2024-04-26 16:32:04.199159] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.354 [2024-04-26 16:32:04.199167] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.354 [2024-04-26 16:32:04.199183] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.354 [2024-04-26 16:32:04.199188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:20:55.354 [2024-04-26 16:32:04.199195] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf900 length 0x10 lkey 0x182300 00:20:55.354 [2024-04-26 16:32:04.199204] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.354 [2024-04-26 16:32:04.199212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.354 [2024-04-26 16:32:04.199229] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.354 [2024-04-26 16:32:04.199235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:20:55.354 [2024-04-26 16:32:04.199241] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf928 length 0x10 lkey 0x182300 00:20:55.354 [2024-04-26 16:32:04.199250] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 
0x40 lkey 0x182300 00:20:55.354 [2024-04-26 16:32:04.199258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.354 [2024-04-26 16:32:04.199281] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.354 [2024-04-26 16:32:04.199287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:20:55.354 [2024-04-26 16:32:04.199293] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf950 length 0x10 lkey 0x182300 00:20:55.354 [2024-04-26 16:32:04.199302] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.354 [2024-04-26 16:32:04.199310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.354 [2024-04-26 16:32:04.199329] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.354 [2024-04-26 16:32:04.199335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:20:55.354 [2024-04-26 16:32:04.199342] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf978 length 0x10 lkey 0x182300 00:20:55.354 [2024-04-26 16:32:04.199354] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.354 [2024-04-26 16:32:04.199362] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.354 [2024-04-26 16:32:04.199382] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.354 [2024-04-26 16:32:04.199387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:20:55.354 [2024-04-26 16:32:04.199394] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9a0 length 0x10 lkey 0x182300 00:20:55.354 [2024-04-26 16:32:04.199402] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.354 [2024-04-26 16:32:04.199412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.354 [2024-04-26 16:32:04.199429] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.354 [2024-04-26 16:32:04.199435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:20:55.354 [2024-04-26 16:32:04.199442] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9c8 length 0x10 lkey 0x182300 00:20:55.354 [2024-04-26 16:32:04.199450] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.354 [2024-04-26 16:32:04.199458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.354 [2024-04-26 16:32:04.199478] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.354 [2024-04-26 16:32:04.199483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:20:55.354 [2024-04-26 
16:32:04.199490] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9f0 length 0x10 lkey 0x182300 00:20:55.354 [2024-04-26 16:32:04.199499] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.354 [2024-04-26 16:32:04.199506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.354 [2024-04-26 16:32:04.199524] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.354 [2024-04-26 16:32:04.199530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:20:55.354 [2024-04-26 16:32:04.199536] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa18 length 0x10 lkey 0x182300 00:20:55.354 [2024-04-26 16:32:04.199545] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.354 [2024-04-26 16:32:04.199553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.354 [2024-04-26 16:32:04.199567] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.354 [2024-04-26 16:32:04.199572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:20:55.354 [2024-04-26 16:32:04.199579] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa40 length 0x10 lkey 0x182300 00:20:55.354 [2024-04-26 16:32:04.199588] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.354 [2024-04-26 16:32:04.199596] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.354 [2024-04-26 16:32:04.199615] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.354 [2024-04-26 16:32:04.199621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:20:55.354 [2024-04-26 16:32:04.199627] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa68 length 0x10 lkey 0x182300 00:20:55.354 [2024-04-26 16:32:04.199636] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.354 [2024-04-26 16:32:04.199644] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.354 [2024-04-26 16:32:04.199660] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.354 [2024-04-26 16:32:04.199665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:20:55.354 [2024-04-26 16:32:04.199672] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa90 length 0x10 lkey 0x182300 00:20:55.354 [2024-04-26 16:32:04.199682] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.354 [2024-04-26 16:32:04.199690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.354 [2024-04-26 16:32:04.199711] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.354 [2024-04-26 16:32:04.199717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:20:55.354 [2024-04-26 16:32:04.199723] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfab8 length 0x10 lkey 0x182300 00:20:55.354 [2024-04-26 16:32:04.199732] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.354 [2024-04-26 16:32:04.199740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.355 [2024-04-26 16:32:04.199756] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.355 [2024-04-26 16:32:04.199761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:20:55.355 [2024-04-26 16:32:04.199768] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfae0 length 0x10 lkey 0x182300 00:20:55.355 [2024-04-26 16:32:04.199777] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.355 [2024-04-26 16:32:04.199784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.355 [2024-04-26 16:32:04.199806] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.355 [2024-04-26 16:32:04.199811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:20:55.355 [2024-04-26 16:32:04.199818] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb08 length 0x10 lkey 0x182300 00:20:55.355 [2024-04-26 16:32:04.199827] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.355 [2024-04-26 16:32:04.199835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.355 [2024-04-26 16:32:04.199856] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.355 [2024-04-26 16:32:04.199862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:20:55.355 [2024-04-26 16:32:04.199868] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb30 length 0x10 lkey 0x182300 00:20:55.355 [2024-04-26 16:32:04.199877] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.355 [2024-04-26 16:32:04.199885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.355 [2024-04-26 16:32:04.199902] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.355 [2024-04-26 16:32:04.199908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:20:55.355 [2024-04-26 16:32:04.199915] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf680 length 0x10 lkey 0x182300 00:20:55.355 [2024-04-26 16:32:04.199923] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 
0x40 lkey 0x182300 00:20:55.355 [2024-04-26 16:32:04.199931] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.355 [2024-04-26 16:32:04.199949] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.355 [2024-04-26 16:32:04.199954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:20:55.355 [2024-04-26 16:32:04.199963] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a8 length 0x10 lkey 0x182300 00:20:55.355 [2024-04-26 16:32:04.199972] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.355 [2024-04-26 16:32:04.199979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.355 [2024-04-26 16:32:04.199995] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.355 [2024-04-26 16:32:04.200001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:20:55.355 [2024-04-26 16:32:04.200007] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6d0 length 0x10 lkey 0x182300 00:20:55.355 [2024-04-26 16:32:04.200016] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.355 [2024-04-26 16:32:04.200024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.355 [2024-04-26 16:32:04.200040] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.355 [2024-04-26 16:32:04.200045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:20:55.355 [2024-04-26 16:32:04.200052] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f8 length 0x10 lkey 0x182300 00:20:55.355 [2024-04-26 16:32:04.200061] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.355 [2024-04-26 16:32:04.200068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.355 [2024-04-26 16:32:04.200084] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.355 [2024-04-26 16:32:04.200090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:20:55.355 [2024-04-26 16:32:04.200096] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf720 length 0x10 lkey 0x182300 00:20:55.355 [2024-04-26 16:32:04.200105] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.355 [2024-04-26 16:32:04.200113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.355 [2024-04-26 16:32:04.200130] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.355 [2024-04-26 16:32:04.200136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:20:55.355 [2024-04-26 
16:32:04.200143] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf748 length 0x10 lkey 0x182300 00:20:55.355 [2024-04-26 16:32:04.200151] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.355 [2024-04-26 16:32:04.200159] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.355 [2024-04-26 16:32:04.200175] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.355 [2024-04-26 16:32:04.200180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:20:55.355 [2024-04-26 16:32:04.200187] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf770 length 0x10 lkey 0x182300 00:20:55.355 [2024-04-26 16:32:04.200196] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.355 [2024-04-26 16:32:04.200204] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.355 [2024-04-26 16:32:04.200225] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.355 [2024-04-26 16:32:04.200231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:20:55.355 [2024-04-26 16:32:04.200239] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf798 length 0x10 lkey 0x182300 00:20:55.355 [2024-04-26 16:32:04.200248] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.355 [2024-04-26 16:32:04.200255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.355 [2024-04-26 16:32:04.200273] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.355 [2024-04-26 16:32:04.200279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:20:55.355 [2024-04-26 16:32:04.200285] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7c0 length 0x10 lkey 0x182300 00:20:55.355 [2024-04-26 16:32:04.200294] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.355 [2024-04-26 16:32:04.200302] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.355 [2024-04-26 16:32:04.200323] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.355 [2024-04-26 16:32:04.200329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:20:55.355 [2024-04-26 16:32:04.200335] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7e8 length 0x10 lkey 0x182300 00:20:55.355 [2024-04-26 16:32:04.200344] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.355 [2024-04-26 16:32:04.204359] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.355 [2024-04-26 16:32:04.204376] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.355 [2024-04-26 16:32:04.204382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0009 p:0 m:0 dnr:0 00:20:55.355 [2024-04-26 16:32:04.204389] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf810 length 0x10 lkey 0x182300 00:20:55.355 [2024-04-26 16:32:04.204396] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:20:55.355 128 00:20:55.355 Transport Service Identifier: 4420 00:20:55.355 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:55.355 Transport Address: 192.168.100.8 00:20:55.355 Transport Specific Address Subtype - RDMA 00:20:55.355 RDMA QP Service Type: 1 (Reliable Connected) 00:20:55.355 RDMA Provider Type: 1 (No provider specified) 00:20:55.355 RDMA CM Service: 1 (RDMA_CM) 00:20:55.355 16:32:04 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:55.355 [2024-04-26 16:32:04.278521] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:20:55.355 [2024-04-26 16:32:04.278561] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid529166 ] 00:20:55.355 EAL: No free 2048 kB hugepages reported on node 1 00:20:55.355 [2024-04-26 16:32:04.323748] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:55.355 [2024-04-26 16:32:04.323824] nvme_rdma.c:2261:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:20:55.355 [2024-04-26 16:32:04.323839] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:20:55.355 [2024-04-26 16:32:04.323846] nvme_rdma.c:1295:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:20:55.355 [2024-04-26 16:32:04.323869] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:55.355 [2024-04-26 16:32:04.334826] nvme_rdma.c: 510:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
00:20:55.355 [2024-04-26 16:32:04.345088] nvme_rdma.c:1180:nvme_rdma_connect_established: *DEBUG*: rc =0 00:20:55.355 [2024-04-26 16:32:04.345099] nvme_rdma.c:1185:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:20:55.355 [2024-04-26 16:32:04.345106] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf680 length 0x10 lkey 0x182300 00:20:55.355 [2024-04-26 16:32:04.345113] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6a8 length 0x10 lkey 0x182300 00:20:55.355 [2024-04-26 16:32:04.345119] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6d0 length 0x10 lkey 0x182300 00:20:55.355 [2024-04-26 16:32:04.345126] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6f8 length 0x10 lkey 0x182300 00:20:55.356 [2024-04-26 16:32:04.345132] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf720 length 0x10 lkey 0x182300 00:20:55.356 [2024-04-26 16:32:04.345138] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf748 length 0x10 lkey 0x182300 00:20:55.356 [2024-04-26 16:32:04.345145] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf770 length 0x10 lkey 0x182300 00:20:55.356 [2024-04-26 16:32:04.345151] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf798 length 0x10 lkey 0x182300 00:20:55.356 [2024-04-26 16:32:04.345157] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7c0 length 0x10 lkey 0x182300 00:20:55.356 [2024-04-26 16:32:04.345164] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7e8 length 0x10 lkey 0x182300 00:20:55.356 [2024-04-26 16:32:04.345170] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf810 length 0x10 lkey 0x182300 00:20:55.356 [2024-04-26 16:32:04.345176] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf838 length 0x10 lkey 0x182300 00:20:55.356 [2024-04-26 16:32:04.345183] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf860 length 0x10 lkey 0x182300 00:20:55.356 [2024-04-26 16:32:04.345189] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf888 length 0x10 lkey 0x182300 00:20:55.356 [2024-04-26 16:32:04.345195] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8b0 length 0x10 lkey 0x182300 00:20:55.356 [2024-04-26 16:32:04.345202] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8d8 length 0x10 lkey 0x182300 00:20:55.356 [2024-04-26 16:32:04.345208] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf900 length 0x10 lkey 0x182300 00:20:55.356 [2024-04-26 16:32:04.345214] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf928 length 0x10 lkey 0x182300 00:20:55.356 [2024-04-26 16:32:04.345221] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf950 length 0x10 lkey 0x182300 00:20:55.356 [2024-04-26 16:32:04.345227] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf978 length 0x10 lkey 0x182300 00:20:55.356 [2024-04-26 16:32:04.345233] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9a0 length 0x10 lkey 0x182300 00:20:55.356 [2024-04-26 16:32:04.345240] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9c8 length 0x10 lkey 0x182300 00:20:55.356 [2024-04-26 16:32:04.345246] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9f0 length 0x10 lkey 0x182300 00:20:55.356 [2024-04-26 
16:32:04.345252] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa18 length 0x10 lkey 0x182300 00:20:55.356 [2024-04-26 16:32:04.345259] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa40 length 0x10 lkey 0x182300 00:20:55.356 [2024-04-26 16:32:04.345265] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa68 length 0x10 lkey 0x182300 00:20:55.356 [2024-04-26 16:32:04.345271] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa90 length 0x10 lkey 0x182300 00:20:55.356 [2024-04-26 16:32:04.345280] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfab8 length 0x10 lkey 0x182300 00:20:55.356 [2024-04-26 16:32:04.345286] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfae0 length 0x10 lkey 0x182300 00:20:55.356 [2024-04-26 16:32:04.345292] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb08 length 0x10 lkey 0x182300 00:20:55.356 [2024-04-26 16:32:04.345299] nvme_rdma.c: 968:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb30 length 0x10 lkey 0x182300 00:20:55.356 [2024-04-26 16:32:04.345305] nvme_rdma.c:1199:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:20:55.356 [2024-04-26 16:32:04.345310] nvme_rdma.c:1202:nvme_rdma_connect_established: *DEBUG*: rc =0 00:20:55.356 [2024-04-26 16:32:04.345315] nvme_rdma.c:1207:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:20:55.356 [2024-04-26 16:32:04.345331] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182300 00:20:55.356 [2024-04-26 16:32:04.345343] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf200 len:0x400 key:0x182300 00:20:55.356 [2024-04-26 16:32:04.350352] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.356 [2024-04-26 16:32:04.350362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:20:55.356 [2024-04-26 16:32:04.350370] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf680 length 0x10 lkey 0x182300 00:20:55.356 [2024-04-26 16:32:04.350378] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:55.356 [2024-04-26 16:32:04.350385] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:55.356 [2024-04-26 16:32:04.350392] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:55.356 [2024-04-26 16:32:04.350406] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182300 00:20:55.356 [2024-04-26 16:32:04.350415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.356 [2024-04-26 16:32:04.350432] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.356 [2024-04-26 16:32:04.350438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:20:55.356 [2024-04-26 16:32:04.350447] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:55.356 [2024-04-26 16:32:04.350453] nvme_rdma.c:2436:nvme_rdma_request_ready: 
*DEBUG*: local addr 0x2000003cf6a8 length 0x10 lkey 0x182300 00:20:55.356 [2024-04-26 16:32:04.350461] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:55.356 [2024-04-26 16:32:04.350469] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182300 00:20:55.356 [2024-04-26 16:32:04.350476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.356 [2024-04-26 16:32:04.350493] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.356 [2024-04-26 16:32:04.350498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:20:55.356 [2024-04-26 16:32:04.350505] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:55.356 [2024-04-26 16:32:04.350511] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6d0 length 0x10 lkey 0x182300 00:20:55.356 [2024-04-26 16:32:04.350519] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:55.356 [2024-04-26 16:32:04.350526] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182300 00:20:55.356 [2024-04-26 16:32:04.350536] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.356 [2024-04-26 16:32:04.350554] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.356 [2024-04-26 16:32:04.350560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:55.356 [2024-04-26 16:32:04.350567] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:55.356 [2024-04-26 16:32:04.350573] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f8 length 0x10 lkey 0x182300 00:20:55.356 [2024-04-26 16:32:04.350582] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182300 00:20:55.356 [2024-04-26 16:32:04.350589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.356 [2024-04-26 16:32:04.350604] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.356 [2024-04-26 16:32:04.350610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:55.356 [2024-04-26 16:32:04.350616] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:55.356 [2024-04-26 16:32:04.350622] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:55.356 [2024-04-26 16:32:04.350628] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf720 length 0x10 lkey 0x182300 00:20:55.356 [2024-04-26 16:32:04.350635] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by 
writing CC.EN = 1 (timeout 15000 ms) 00:20:55.356 [2024-04-26 16:32:04.350742] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:55.356 [2024-04-26 16:32:04.350747] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:55.356 [2024-04-26 16:32:04.350756] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182300 00:20:55.356 [2024-04-26 16:32:04.350764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.356 [2024-04-26 16:32:04.350784] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.356 [2024-04-26 16:32:04.350790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:55.356 [2024-04-26 16:32:04.350796] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:55.356 [2024-04-26 16:32:04.350802] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf748 length 0x10 lkey 0x182300 00:20:55.356 [2024-04-26 16:32:04.350811] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182300 00:20:55.356 [2024-04-26 16:32:04.350819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.356 [2024-04-26 16:32:04.350837] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.356 [2024-04-26 16:32:04.350843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:20:55.356 [2024-04-26 16:32:04.350849] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:55.356 [2024-04-26 16:32:04.350855] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:55.356 [2024-04-26 16:32:04.350863] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf770 length 0x10 lkey 0x182300 00:20:55.356 [2024-04-26 16:32:04.350870] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:55.356 [2024-04-26 16:32:04.350882] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:55.356 [2024-04-26 16:32:04.350892] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182300 00:20:55.356 [2024-04-26 16:32:04.350900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x182300 00:20:55.356 [2024-04-26 16:32:04.350939] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.356 [2024-04-26 16:32:04.350944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:55.356 [2024-04-26 16:32:04.350954] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:55.357 [2024-04-26 16:32:04.350960] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:55.357 [2024-04-26 16:32:04.350966] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:55.357 [2024-04-26 16:32:04.350973] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:55.357 [2024-04-26 16:32:04.350979] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:55.357 [2024-04-26 16:32:04.350985] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:55.357 [2024-04-26 16:32:04.350992] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf798 length 0x10 lkey 0x182300 00:20:55.357 [2024-04-26 16:32:04.350999] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:55.357 [2024-04-26 16:32:04.351007] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182300 00:20:55.357 [2024-04-26 16:32:04.351015] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.357 [2024-04-26 16:32:04.351031] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.357 [2024-04-26 16:32:04.351037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:55.357 [2024-04-26 16:32:04.351045] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0500 length 0x40 lkey 0x182300 00:20:55.357 [2024-04-26 16:32:04.351053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:55.357 [2024-04-26 16:32:04.351060] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0640 length 0x40 lkey 0x182300 00:20:55.357 [2024-04-26 16:32:04.351067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:55.357 [2024-04-26 16:32:04.351074] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.357 [2024-04-26 16:32:04.351081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:55.357 [2024-04-26 16:32:04.351089] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d08c0 length 0x40 lkey 0x182300 00:20:55.357 [2024-04-26 16:32:04.351096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:55.357 [2024-04-26 16:32:04.351102] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:55.357 [2024-04-26 16:32:04.351109] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7c0 length 0x10 lkey 0x182300 00:20:55.357 [2024-04-26 16:32:04.351120] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:55.357 [2024-04-26 16:32:04.351127] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182300 00:20:55.357 [2024-04-26 16:32:04.351135] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.357 [2024-04-26 16:32:04.351153] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.357 [2024-04-26 16:32:04.351159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:20:55.357 [2024-04-26 16:32:04.351166] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:55.357 [2024-04-26 16:32:04.351172] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:55.357 [2024-04-26 16:32:04.351178] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7e8 length 0x10 lkey 0x182300 00:20:55.357 [2024-04-26 16:32:04.351186] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:55.357 [2024-04-26 16:32:04.351193] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:55.357 [2024-04-26 16:32:04.351201] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182300 00:20:55.357 [2024-04-26 16:32:04.351208] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.357 [2024-04-26 16:32:04.351230] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.357 [2024-04-26 16:32:04.351236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:20:55.357 [2024-04-26 16:32:04.351277] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:55.357 [2024-04-26 16:32:04.351284] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf810 length 0x10 lkey 0x182300 00:20:55.357 [2024-04-26 16:32:04.351292] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:55.357 [2024-04-26 16:32:04.351301] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182300 00:20:55.357 [2024-04-26 16:32:04.351309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x182300 00:20:55.357 [2024-04-26 16:32:04.351333] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.357 [2024-04-26 16:32:04.351338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:55.357 [2024-04-26 16:32:04.351354] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:55.357 
[2024-04-26 16:32:04.351364] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:55.357 [2024-04-26 16:32:04.351371] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf838 length 0x10 lkey 0x182300 00:20:55.357 [2024-04-26 16:32:04.351379] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:55.357 [2024-04-26 16:32:04.351387] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182300 00:20:55.357 [2024-04-26 16:32:04.351396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x182300 00:20:55.357 [2024-04-26 16:32:04.351430] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.357 [2024-04-26 16:32:04.351435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:55.357 [2024-04-26 16:32:04.351448] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:55.357 [2024-04-26 16:32:04.351455] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf860 length 0x10 lkey 0x182300 00:20:55.357 [2024-04-26 16:32:04.351463] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:55.357 [2024-04-26 16:32:04.351471] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182300 00:20:55.357 [2024-04-26 16:32:04.351479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x182300 00:20:55.357 [2024-04-26 16:32:04.351504] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.357 [2024-04-26 16:32:04.351510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:55.357 [2024-04-26 16:32:04.351519] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:55.357 [2024-04-26 16:32:04.351526] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf888 length 0x10 lkey 0x182300 00:20:55.357 [2024-04-26 16:32:04.351533] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:55.357 [2024-04-26 16:32:04.351542] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:55.357 [2024-04-26 16:32:04.351549] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:55.357 [2024-04-26 16:32:04.351556] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:55.357 [2024-04-26 16:32:04.351563] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - 
Host ID 00:20:55.357 [2024-04-26 16:32:04.351569] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:55.357 [2024-04-26 16:32:04.351575] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:20:55.357 [2024-04-26 16:32:04.351590] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182300 00:20:55.357 [2024-04-26 16:32:04.351598] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.357 [2024-04-26 16:32:04.351606] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a00 length 0x40 lkey 0x182300 00:20:55.357 [2024-04-26 16:32:04.351613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:55.357 [2024-04-26 16:32:04.351624] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.357 [2024-04-26 16:32:04.351630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:55.357 [2024-04-26 16:32:04.351636] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8b0 length 0x10 lkey 0x182300 00:20:55.357 [2024-04-26 16:32:04.351644] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.357 [2024-04-26 16:32:04.351650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:55.357 [2024-04-26 16:32:04.351656] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8d8 length 0x10 lkey 0x182300 00:20:55.357 [2024-04-26 16:32:04.351666] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a00 length 0x40 lkey 0x182300 00:20:55.357 [2024-04-26 16:32:04.351673] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.357 [2024-04-26 16:32:04.351693] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.357 [2024-04-26 16:32:04.351699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:55.358 [2024-04-26 16:32:04.351705] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf900 length 0x10 lkey 0x182300 00:20:55.358 [2024-04-26 16:32:04.351714] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a00 length 0x40 lkey 0x182300 00:20:55.358 [2024-04-26 16:32:04.351722] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.358 [2024-04-26 16:32:04.351742] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.358 [2024-04-26 16:32:04.351747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:55.358 [2024-04-26 16:32:04.351754] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf928 length 0x10 lkey 0x182300 00:20:55.358 [2024-04-26 16:32:04.351763] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a00 length 0x40 
lkey 0x182300 00:20:55.358 [2024-04-26 16:32:04.351771] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.358 [2024-04-26 16:32:04.351793] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.358 [2024-04-26 16:32:04.351799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:20:55.358 [2024-04-26 16:32:04.351806] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf950 length 0x10 lkey 0x182300 00:20:55.358 [2024-04-26 16:32:04.351817] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a00 length 0x40 lkey 0x182300 00:20:55.358 [2024-04-26 16:32:04.351825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x182300 00:20:55.358 [2024-04-26 16:32:04.351833] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d03c0 length 0x40 lkey 0x182300 00:20:55.358 [2024-04-26 16:32:04.351841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x182300 00:20:55.358 [2024-04-26 16:32:04.351850] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b40 length 0x40 lkey 0x182300 00:20:55.358 [2024-04-26 16:32:04.351857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x182300 00:20:55.358 [2024-04-26 16:32:04.351866] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c80 length 0x40 lkey 0x182300 00:20:55.358 [2024-04-26 16:32:04.351873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x182300 00:20:55.358 [2024-04-26 16:32:04.351882] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.358 [2024-04-26 16:32:04.351891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:55.358 [2024-04-26 16:32:04.351903] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf978 length 0x10 lkey 0x182300 00:20:55.358 [2024-04-26 16:32:04.351910] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.358 [2024-04-26 16:32:04.351915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:55.358 [2024-04-26 16:32:04.351924] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9a0 length 0x10 lkey 0x182300 00:20:55.358 [2024-04-26 16:32:04.351931] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.358 [2024-04-26 16:32:04.351937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:55.358 [2024-04-26 16:32:04.351944] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9c8 length 0x10 lkey 0x182300 00:20:55.358 [2024-04-26 16:32:04.351950] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.358 [2024-04-26 16:32:04.351956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:55.358 [2024-04-26 16:32:04.351967] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9f0 length 0x10 lkey 0x182300 00:20:55.358 ===================================================== 00:20:55.358 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:20:55.358 ===================================================== 00:20:55.358 Controller Capabilities/Features 00:20:55.358 ================================ 00:20:55.358 Vendor ID: 8086 00:20:55.358 Subsystem Vendor ID: 8086 00:20:55.358 Serial Number: SPDK00000000000001 00:20:55.358 Model Number: SPDK bdev Controller 00:20:55.358 Firmware Version: 24.05 00:20:55.358 Recommended Arb Burst: 6 00:20:55.358 IEEE OUI Identifier: e4 d2 5c 00:20:55.358 Multi-path I/O 00:20:55.358 May have multiple subsystem ports: Yes 00:20:55.358 May have multiple controllers: Yes 00:20:55.358 Associated with SR-IOV VF: No 00:20:55.358 Max Data Transfer Size: 131072 00:20:55.358 Max Number of Namespaces: 32 00:20:55.358 Max Number of I/O Queues: 127 00:20:55.358 NVMe Specification Version (VS): 1.3 00:20:55.358 NVMe Specification Version (Identify): 1.3 00:20:55.358 Maximum Queue Entries: 128 00:20:55.358 Contiguous Queues Required: Yes 00:20:55.358 Arbitration Mechanisms Supported 00:20:55.358 Weighted Round Robin: Not Supported 00:20:55.358 Vendor Specific: Not Supported 00:20:55.358 Reset Timeout: 15000 ms 00:20:55.358 Doorbell Stride: 4 bytes 00:20:55.358 NVM Subsystem Reset: Not Supported 00:20:55.358 Command Sets Supported 00:20:55.358 NVM Command Set: Supported 00:20:55.358 Boot Partition: Not Supported 00:20:55.358 Memory Page Size Minimum: 4096 bytes 00:20:55.358 Memory Page Size Maximum: 4096 bytes 00:20:55.358 Persistent Memory Region: Not Supported 00:20:55.358 Optional Asynchronous Events Supported 00:20:55.358 Namespace Attribute Notices: Supported 00:20:55.358 Firmware Activation Notices: Not Supported 00:20:55.358 ANA Change Notices: Not Supported 00:20:55.358 PLE Aggregate Log Change Notices: Not Supported 00:20:55.358 LBA Status Info Alert Notices: Not Supported 00:20:55.358 EGE Aggregate Log Change Notices: Not Supported 00:20:55.358 Normal NVM Subsystem Shutdown event: Not Supported 00:20:55.358 Zone Descriptor Change Notices: Not Supported 00:20:55.358 Discovery Log Change Notices: Not Supported 00:20:55.358 Controller Attributes 00:20:55.358 128-bit Host Identifier: Supported 00:20:55.358 Non-Operational Permissive Mode: Not Supported 00:20:55.358 NVM Sets: Not Supported 00:20:55.358 Read Recovery Levels: Not Supported 00:20:55.358 Endurance Groups: Not Supported 00:20:55.358 Predictable Latency Mode: Not Supported 00:20:55.358 Traffic Based Keep ALive: Not Supported 00:20:55.358 Namespace Granularity: Not Supported 00:20:55.358 SQ Associations: Not Supported 00:20:55.358 UUID List: Not Supported 00:20:55.358 Multi-Domain Subsystem: Not Supported 00:20:55.358 Fixed Capacity Management: Not Supported 00:20:55.358 Variable Capacity Management: Not Supported 00:20:55.358 Delete Endurance Group: Not Supported 00:20:55.358 Delete NVM Set: Not Supported 00:20:55.358 Extended LBA Formats Supported: Not Supported 00:20:55.358 Flexible Data Placement Supported: Not Supported 00:20:55.358 00:20:55.358 Controller Memory Buffer Support 00:20:55.358 
================================ 00:20:55.358 Supported: No 00:20:55.358 00:20:55.358 Persistent Memory Region Support 00:20:55.358 ================================ 00:20:55.358 Supported: No 00:20:55.358 00:20:55.358 Admin Command Set Attributes 00:20:55.358 ============================ 00:20:55.358 Security Send/Receive: Not Supported 00:20:55.358 Format NVM: Not Supported 00:20:55.358 Firmware Activate/Download: Not Supported 00:20:55.358 Namespace Management: Not Supported 00:20:55.358 Device Self-Test: Not Supported 00:20:55.358 Directives: Not Supported 00:20:55.358 NVMe-MI: Not Supported 00:20:55.358 Virtualization Management: Not Supported 00:20:55.359 Doorbell Buffer Config: Not Supported 00:20:55.359 Get LBA Status Capability: Not Supported 00:20:55.359 Command & Feature Lockdown Capability: Not Supported 00:20:55.359 Abort Command Limit: 4 00:20:55.359 Async Event Request Limit: 4 00:20:55.359 Number of Firmware Slots: N/A 00:20:55.359 Firmware Slot 1 Read-Only: N/A 00:20:55.359 Firmware Activation Without Reset: N/A 00:20:55.359 Multiple Update Detection Support: N/A 00:20:55.359 Firmware Update Granularity: No Information Provided 00:20:55.359 Per-Namespace SMART Log: No 00:20:55.359 Asymmetric Namespace Access Log Page: Not Supported 00:20:55.359 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:55.359 Command Effects Log Page: Supported 00:20:55.359 Get Log Page Extended Data: Supported 00:20:55.359 Telemetry Log Pages: Not Supported 00:20:55.359 Persistent Event Log Pages: Not Supported 00:20:55.359 Supported Log Pages Log Page: May Support 00:20:55.359 Commands Supported & Effects Log Page: Not Supported 00:20:55.359 Feature Identifiers & Effects Log Page:May Support 00:20:55.359 NVMe-MI Commands & Effects Log Page: May Support 00:20:55.359 Data Area 4 for Telemetry Log: Not Supported 00:20:55.359 Error Log Page Entries Supported: 128 00:20:55.359 Keep Alive: Supported 00:20:55.359 Keep Alive Granularity: 10000 ms 00:20:55.359 00:20:55.359 NVM Command Set Attributes 00:20:55.359 ========================== 00:20:55.359 Submission Queue Entry Size 00:20:55.359 Max: 64 00:20:55.359 Min: 64 00:20:55.359 Completion Queue Entry Size 00:20:55.359 Max: 16 00:20:55.359 Min: 16 00:20:55.359 Number of Namespaces: 32 00:20:55.359 Compare Command: Supported 00:20:55.359 Write Uncorrectable Command: Not Supported 00:20:55.359 Dataset Management Command: Supported 00:20:55.359 Write Zeroes Command: Supported 00:20:55.359 Set Features Save Field: Not Supported 00:20:55.359 Reservations: Supported 00:20:55.359 Timestamp: Not Supported 00:20:55.359 Copy: Supported 00:20:55.359 Volatile Write Cache: Present 00:20:55.359 Atomic Write Unit (Normal): 1 00:20:55.359 Atomic Write Unit (PFail): 1 00:20:55.359 Atomic Compare & Write Unit: 1 00:20:55.359 Fused Compare & Write: Supported 00:20:55.359 Scatter-Gather List 00:20:55.359 SGL Command Set: Supported 00:20:55.359 SGL Keyed: Supported 00:20:55.359 SGL Bit Bucket Descriptor: Not Supported 00:20:55.359 SGL Metadata Pointer: Not Supported 00:20:55.359 Oversized SGL: Not Supported 00:20:55.359 SGL Metadata Address: Not Supported 00:20:55.359 SGL Offset: Supported 00:20:55.359 Transport SGL Data Block: Not Supported 00:20:55.359 Replay Protected Memory Block: Not Supported 00:20:55.359 00:20:55.359 Firmware Slot Information 00:20:55.359 ========================= 00:20:55.359 Active slot: 1 00:20:55.359 Slot 1 Firmware Revision: 24.05 00:20:55.359 00:20:55.359 00:20:55.359 Commands Supported and Effects 00:20:55.359 ============================== 
00:20:55.359 Admin Commands 00:20:55.359 -------------- 00:20:55.359 Get Log Page (02h): Supported 00:20:55.359 Identify (06h): Supported 00:20:55.359 Abort (08h): Supported 00:20:55.359 Set Features (09h): Supported 00:20:55.359 Get Features (0Ah): Supported 00:20:55.359 Asynchronous Event Request (0Ch): Supported 00:20:55.359 Keep Alive (18h): Supported 00:20:55.359 I/O Commands 00:20:55.359 ------------ 00:20:55.359 Flush (00h): Supported LBA-Change 00:20:55.359 Write (01h): Supported LBA-Change 00:20:55.359 Read (02h): Supported 00:20:55.359 Compare (05h): Supported 00:20:55.359 Write Zeroes (08h): Supported LBA-Change 00:20:55.359 Dataset Management (09h): Supported LBA-Change 00:20:55.359 Copy (19h): Supported LBA-Change 00:20:55.359 Unknown (79h): Supported LBA-Change 00:20:55.359 Unknown (7Ah): Supported 00:20:55.359 00:20:55.359 Error Log 00:20:55.359 ========= 00:20:55.359 00:20:55.359 Arbitration 00:20:55.359 =========== 00:20:55.359 Arbitration Burst: 1 00:20:55.359 00:20:55.359 Power Management 00:20:55.359 ================ 00:20:55.359 Number of Power States: 1 00:20:55.359 Current Power State: Power State #0 00:20:55.359 Power State #0: 00:20:55.359 Max Power: 0.00 W 00:20:55.359 Non-Operational State: Operational 00:20:55.359 Entry Latency: Not Reported 00:20:55.359 Exit Latency: Not Reported 00:20:55.359 Relative Read Throughput: 0 00:20:55.359 Relative Read Latency: 0 00:20:55.359 Relative Write Throughput: 0 00:20:55.359 Relative Write Latency: 0 00:20:55.359 Idle Power: Not Reported 00:20:55.359 Active Power: Not Reported 00:20:55.359 Non-Operational Permissive Mode: Not Supported 00:20:55.359 00:20:55.359 Health Information 00:20:55.359 ================== 00:20:55.359 Critical Warnings: 00:20:55.359 Available Spare Space: OK 00:20:55.359 Temperature: OK 00:20:55.359 Device Reliability: OK 00:20:55.359 Read Only: No 00:20:55.359 Volatile Memory Backup: OK 00:20:55.359 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:55.359 Temperature Threshold: [2024-04-26 16:32:04.352049] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c80 length 0x40 lkey 0x182300 00:20:55.359 [2024-04-26 16:32:04.352058] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.359 [2024-04-26 16:32:04.352075] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.359 [2024-04-26 16:32:04.352081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:55.359 [2024-04-26 16:32:04.352087] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa18 length 0x10 lkey 0x182300 00:20:55.359 [2024-04-26 16:32:04.352111] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:55.359 [2024-04-26 16:32:04.352121] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 52663 doesn't match qid 00:20:55.359 [2024-04-26 16:32:04.352134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32753 cdw0:5 sqhd:f790 p:0 m:0 dnr:0 00:20:55.359 [2024-04-26 16:32:04.352141] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 52663 doesn't match qid 00:20:55.359 [2024-04-26 16:32:04.352150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32753 cdw0:5 sqhd:f790 p:0 m:0 dnr:0 00:20:55.359 [2024-04-26 
16:32:04.352156] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 52663 doesn't match qid 00:20:55.359 [2024-04-26 16:32:04.352165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32753 cdw0:5 sqhd:f790 p:0 m:0 dnr:0 00:20:55.359 [2024-04-26 16:32:04.352171] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 52663 doesn't match qid 00:20:55.359 [2024-04-26 16:32:04.352179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32753 cdw0:5 sqhd:f790 p:0 m:0 dnr:0 00:20:55.359 [2024-04-26 16:32:04.352188] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d08c0 length 0x40 lkey 0x182300 00:20:55.359 [2024-04-26 16:32:04.352196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.359 [2024-04-26 16:32:04.352212] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.359 [2024-04-26 16:32:04.352218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:20:55.359 [2024-04-26 16:32:04.352226] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.359 [2024-04-26 16:32:04.352236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.359 [2024-04-26 16:32:04.352242] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa40 length 0x10 lkey 0x182300 00:20:55.359 [2024-04-26 16:32:04.352259] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.359 [2024-04-26 16:32:04.352265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:55.359 [2024-04-26 16:32:04.352272] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:55.359 [2024-04-26 16:32:04.352277] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:55.359 [2024-04-26 16:32:04.352284] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa68 length 0x10 lkey 0x182300 00:20:55.359 [2024-04-26 16:32:04.352292] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.359 [2024-04-26 16:32:04.352300] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.359 [2024-04-26 16:32:04.352314] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.359 [2024-04-26 16:32:04.352320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:20:55.359 [2024-04-26 16:32:04.352327] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa90 length 0x10 lkey 0x182300 00:20:55.359 [2024-04-26 16:32:04.352336] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.359 [2024-04-26 16:32:04.352344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.359 [2024-04-26 16:32:04.352373] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.359 [2024-04-26 16:32:04.352378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:20:55.359 [2024-04-26 16:32:04.352385] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfab8 length 0x10 lkey 0x182300 00:20:55.360 [2024-04-26 16:32:04.352394] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.360 [2024-04-26 16:32:04.352402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.360 [2024-04-26 16:32:04.352421] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.360 [2024-04-26 16:32:04.352427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:20:55.360 [2024-04-26 16:32:04.352434] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfae0 length 0x10 lkey 0x182300 00:20:55.360 [2024-04-26 16:32:04.352443] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.360 [2024-04-26 16:32:04.352451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.360 [2024-04-26 16:32:04.352467] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.360 [2024-04-26 16:32:04.352473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:20:55.360 [2024-04-26 16:32:04.352479] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb08 length 0x10 lkey 0x182300 00:20:55.360 [2024-04-26 16:32:04.352488] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.360 [2024-04-26 16:32:04.352496] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.360 [2024-04-26 16:32:04.352521] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.360 [2024-04-26 16:32:04.352527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:20:55.360 [2024-04-26 16:32:04.352534] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb30 length 0x10 lkey 0x182300 00:20:55.360 [2024-04-26 16:32:04.352543] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.360 [2024-04-26 16:32:04.352551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.360 [2024-04-26 16:32:04.352569] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.360 [2024-04-26 16:32:04.352574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:20:55.360 [2024-04-26 16:32:04.352581] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf680 length 0x10 lkey 0x182300 00:20:55.360 [2024-04-26 16:32:04.352590] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 
0x40 lkey 0x182300 00:20:55.360 [2024-04-26 16:32:04.352598] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.360 [2024-04-26 16:32:04.352619] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.360 [2024-04-26 16:32:04.352625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:20:55.360 [2024-04-26 16:32:04.352632] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a8 length 0x10 lkey 0x182300 00:20:55.360 [2024-04-26 16:32:04.352641] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.360 [2024-04-26 16:32:04.352649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.360 [2024-04-26 16:32:04.352670] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.360 [2024-04-26 16:32:04.352676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:20:55.360 [2024-04-26 16:32:04.352683] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6d0 length 0x10 lkey 0x182300 00:20:55.360 [2024-04-26 16:32:04.352692] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.360 [2024-04-26 16:32:04.352700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.360 [2024-04-26 16:32:04.352723] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.360 [2024-04-26 16:32:04.352729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:20:55.360 [2024-04-26 16:32:04.352735] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f8 length 0x10 lkey 0x182300 00:20:55.360 [2024-04-26 16:32:04.352744] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.360 [2024-04-26 16:32:04.352752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.360 [2024-04-26 16:32:04.352770] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.360 [2024-04-26 16:32:04.352775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:20:55.360 [2024-04-26 16:32:04.352782] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf720 length 0x10 lkey 0x182300 00:20:55.360 [2024-04-26 16:32:04.352791] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.360 [2024-04-26 16:32:04.352799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.360 [2024-04-26 16:32:04.352820] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.360 [2024-04-26 16:32:04.352825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:20:55.360 [2024-04-26 
16:32:04.352832] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf748 length 0x10 lkey 0x182300 00:20:55.360 [2024-04-26 16:32:04.352841] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.360 [2024-04-26 16:32:04.352849] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.360 [2024-04-26 16:32:04.352870] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.360 [2024-04-26 16:32:04.352876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:20:55.360 [2024-04-26 16:32:04.352882] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf770 length 0x10 lkey 0x182300 00:20:55.360 [2024-04-26 16:32:04.352891] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.360 [2024-04-26 16:32:04.352899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.360 [2024-04-26 16:32:04.352915] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.360 [2024-04-26 16:32:04.352921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:20:55.360 [2024-04-26 16:32:04.352927] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf798 length 0x10 lkey 0x182300 00:20:55.360 [2024-04-26 16:32:04.352936] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.360 [2024-04-26 16:32:04.352944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.360 [2024-04-26 16:32:04.352962] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.360 [2024-04-26 16:32:04.352967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:20:55.360 [2024-04-26 16:32:04.352974] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7c0 length 0x10 lkey 0x182300 00:20:55.360 [2024-04-26 16:32:04.352983] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.360 [2024-04-26 16:32:04.352991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.360 [2024-04-26 16:32:04.353007] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.360 [2024-04-26 16:32:04.353012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:20:55.360 [2024-04-26 16:32:04.353019] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7e8 length 0x10 lkey 0x182300 00:20:55.360 [2024-04-26 16:32:04.353028] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.360 [2024-04-26 16:32:04.353035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.360 [2024-04-26 16:32:04.353053] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.360 [2024-04-26 16:32:04.353059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:20:55.360 [2024-04-26 16:32:04.353065] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf810 length 0x10 lkey 0x182300 00:20:55.360 [2024-04-26 16:32:04.353074] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.360 [2024-04-26 16:32:04.353082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.360 [2024-04-26 16:32:04.353099] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.360 [2024-04-26 16:32:04.353105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:20:55.360 [2024-04-26 16:32:04.353111] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf838 length 0x10 lkey 0x182300 00:20:55.360 [2024-04-26 16:32:04.353120] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.360 [2024-04-26 16:32:04.353128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.360 [2024-04-26 16:32:04.353151] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.360 [2024-04-26 16:32:04.353157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:20:55.360 [2024-04-26 16:32:04.353163] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf860 length 0x10 lkey 0x182300 00:20:55.360 [2024-04-26 16:32:04.353172] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.360 [2024-04-26 16:32:04.353180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.360 [2024-04-26 16:32:04.353198] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.360 [2024-04-26 16:32:04.353203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:20:55.360 [2024-04-26 16:32:04.353210] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf888 length 0x10 lkey 0x182300 00:20:55.360 [2024-04-26 16:32:04.353219] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.360 [2024-04-26 16:32:04.353227] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.360 [2024-04-26 16:32:04.353248] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.360 [2024-04-26 16:32:04.353254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:20:55.361 [2024-04-26 16:32:04.353260] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8b0 length 0x10 lkey 0x182300 00:20:55.361 [2024-04-26 16:32:04.353269] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 
0x40 lkey 0x182300 00:20:55.361 [2024-04-26 16:32:04.353277] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.361 [2024-04-26 16:32:04.353298] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.361 [2024-04-26 16:32:04.353304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:20:55.361 [2024-04-26 16:32:04.353311] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8d8 length 0x10 lkey 0x182300 00:20:55.361 [2024-04-26 16:32:04.353320] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.361 [2024-04-26 16:32:04.353327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.361 [2024-04-26 16:32:04.353352] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.361 [2024-04-26 16:32:04.353358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:20:55.361 [2024-04-26 16:32:04.353364] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf900 length 0x10 lkey 0x182300 00:20:55.361 [2024-04-26 16:32:04.353373] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.361 [2024-04-26 16:32:04.353382] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.361 [2024-04-26 16:32:04.353404] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.361 [2024-04-26 16:32:04.353410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:20:55.361 [2024-04-26 16:32:04.353416] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf928 length 0x10 lkey 0x182300 00:20:55.361 [2024-04-26 16:32:04.353425] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.361 [2024-04-26 16:32:04.353433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.361 [2024-04-26 16:32:04.353451] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.361 [2024-04-26 16:32:04.353457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:20:55.361 [2024-04-26 16:32:04.353463] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf950 length 0x10 lkey 0x182300 00:20:55.361 [2024-04-26 16:32:04.353472] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.361 [2024-04-26 16:32:04.353480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.361 [2024-04-26 16:32:04.353500] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.361 [2024-04-26 16:32:04.353505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:20:55.361 [2024-04-26 
16:32:04.353512] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf978 length 0x10 lkey 0x182300 00:20:55.361 [2024-04-26 16:32:04.353521] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.361 [2024-04-26 16:32:04.353528] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.361 [2024-04-26 16:32:04.353550] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.361 [2024-04-26 16:32:04.353556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:20:55.361 [2024-04-26 16:32:04.353562] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9a0 length 0x10 lkey 0x182300 00:20:55.361 [2024-04-26 16:32:04.353571] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.361 [2024-04-26 16:32:04.353579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.361 [2024-04-26 16:32:04.353597] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.361 [2024-04-26 16:32:04.353603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:20:55.361 [2024-04-26 16:32:04.353609] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9c8 length 0x10 lkey 0x182300 00:20:55.361 [2024-04-26 16:32:04.353618] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.361 [2024-04-26 16:32:04.353626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.361 [2024-04-26 16:32:04.353640] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.361 [2024-04-26 16:32:04.353646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:20:55.361 [2024-04-26 16:32:04.353652] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9f0 length 0x10 lkey 0x182300 00:20:55.361 [2024-04-26 16:32:04.353661] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.361 [2024-04-26 16:32:04.353670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.361 [2024-04-26 16:32:04.353694] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.361 [2024-04-26 16:32:04.353699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:20:55.361 [2024-04-26 16:32:04.353706] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa18 length 0x10 lkey 0x182300 00:20:55.361 [2024-04-26 16:32:04.353715] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.361 [2024-04-26 16:32:04.353723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.361 [2024-04-26 16:32:04.353740] 
nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.361 [2024-04-26 16:32:04.353746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:20:55.361 [2024-04-26 16:32:04.353753] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa40 length 0x10 lkey 0x182300 00:20:55.361 [2024-04-26 16:32:04.353762] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.361 [2024-04-26 16:32:04.353770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.361 [2024-04-26 16:32:04.353785] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.361 [2024-04-26 16:32:04.353791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:20:55.361 [2024-04-26 16:32:04.353798] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa68 length 0x10 lkey 0x182300 00:20:55.361 [2024-04-26 16:32:04.353806] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.361 [2024-04-26 16:32:04.353814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.361 [2024-04-26 16:32:04.353832] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.361 [2024-04-26 16:32:04.353838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:20:55.361 [2024-04-26 16:32:04.353844] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa90 length 0x10 lkey 0x182300 00:20:55.361 [2024-04-26 16:32:04.353853] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.361 [2024-04-26 16:32:04.353861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.361 [2024-04-26 16:32:04.353884] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.361 [2024-04-26 16:32:04.353890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:20:55.361 [2024-04-26 16:32:04.353897] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfab8 length 0x10 lkey 0x182300 00:20:55.361 [2024-04-26 16:32:04.353906] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.361 [2024-04-26 16:32:04.353913] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.361 [2024-04-26 16:32:04.353935] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.361 [2024-04-26 16:32:04.353941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:20:55.361 [2024-04-26 16:32:04.353947] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfae0 length 0x10 lkey 0x182300 00:20:55.361 [2024-04-26 16:32:04.353957] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 
0x40 lkey 0x182300 00:20:55.361 [2024-04-26 16:32:04.353965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.361 [2024-04-26 16:32:04.353983] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.361 [2024-04-26 16:32:04.353989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:20:55.361 [2024-04-26 16:32:04.353995] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb08 length 0x10 lkey 0x182300 00:20:55.361 [2024-04-26 16:32:04.354004] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.361 [2024-04-26 16:32:04.354012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.361 [2024-04-26 16:32:04.354028] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.361 [2024-04-26 16:32:04.354033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:20:55.361 [2024-04-26 16:32:04.354040] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb30 length 0x10 lkey 0x182300 00:20:55.361 [2024-04-26 16:32:04.354049] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.361 [2024-04-26 16:32:04.354057] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.361 [2024-04-26 16:32:04.354073] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.362 [2024-04-26 16:32:04.354078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:20:55.362 [2024-04-26 16:32:04.354085] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf680 length 0x10 lkey 0x182300 00:20:55.362 [2024-04-26 16:32:04.354094] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.362 [2024-04-26 16:32:04.354101] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.362 [2024-04-26 16:32:04.354119] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.362 [2024-04-26 16:32:04.354125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:20:55.362 [2024-04-26 16:32:04.354131] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a8 length 0x10 lkey 0x182300 00:20:55.362 [2024-04-26 16:32:04.354140] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.362 [2024-04-26 16:32:04.354148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.362 [2024-04-26 16:32:04.354170] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.362 [2024-04-26 16:32:04.354175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:20:55.362 [2024-04-26 
16:32:04.354182] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6d0 length 0x10 lkey 0x182300 00:20:55.362 [2024-04-26 16:32:04.354191] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.362 [2024-04-26 16:32:04.354199] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.362 [2024-04-26 16:32:04.354214] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.362 [2024-04-26 16:32:04.354220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:20:55.362 [2024-04-26 16:32:04.354227] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f8 length 0x10 lkey 0x182300 00:20:55.362 [2024-04-26 16:32:04.354237] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.362 [2024-04-26 16:32:04.354245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.362 [2024-04-26 16:32:04.354270] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.362 [2024-04-26 16:32:04.354276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:20:55.362 [2024-04-26 16:32:04.354282] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf720 length 0x10 lkey 0x182300 00:20:55.362 [2024-04-26 16:32:04.354291] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.362 [2024-04-26 16:32:04.354299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.362 [2024-04-26 16:32:04.354318] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.362 [2024-04-26 16:32:04.354324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:20:55.362 [2024-04-26 16:32:04.354331] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf748 length 0x10 lkey 0x182300 00:20:55.362 [2024-04-26 16:32:04.354340] nvme_rdma.c:2340:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0780 length 0x40 lkey 0x182300 00:20:55.362 [2024-04-26 16:32:04.358353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:55.362 [2024-04-26 16:32:04.358368] nvme_rdma.c:2543:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:55.362 [2024-04-26 16:32:04.358374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0005 p:0 m:0 dnr:0 00:20:55.362 [2024-04-26 16:32:04.358380] nvme_rdma.c:2436:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf770 length 0x10 lkey 0x182300 00:20:55.362 [2024-04-26 16:32:04.358388] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:20:55.621 0 Kelvin (-273 Celsius) 00:20:55.621 Available Spare: 0% 00:20:55.621 Available Spare Threshold: 0% 00:20:55.621 Life Percentage Used: 0% 00:20:55.621 Data Units Read: 0 00:20:55.621 Data Units Written: 0 00:20:55.621 
Host Read Commands: 0 00:20:55.621 Host Write Commands: 0 00:20:55.621 Controller Busy Time: 0 minutes 00:20:55.621 Power Cycles: 0 00:20:55.621 Power On Hours: 0 hours 00:20:55.621 Unsafe Shutdowns: 0 00:20:55.621 Unrecoverable Media Errors: 0 00:20:55.621 Lifetime Error Log Entries: 0 00:20:55.621 Warning Temperature Time: 0 minutes 00:20:55.621 Critical Temperature Time: 0 minutes 00:20:55.621 00:20:55.621 Number of Queues 00:20:55.621 ================ 00:20:55.621 Number of I/O Submission Queues: 127 00:20:55.621 Number of I/O Completion Queues: 127 00:20:55.621 00:20:55.621 Active Namespaces 00:20:55.621 ================= 00:20:55.621 Namespace ID:1 00:20:55.621 Error Recovery Timeout: Unlimited 00:20:55.621 Command Set Identifier: NVM (00h) 00:20:55.621 Deallocate: Supported 00:20:55.621 Deallocated/Unwritten Error: Not Supported 00:20:55.621 Deallocated Read Value: Unknown 00:20:55.621 Deallocate in Write Zeroes: Not Supported 00:20:55.621 Deallocated Guard Field: 0xFFFF 00:20:55.621 Flush: Supported 00:20:55.621 Reservation: Supported 00:20:55.621 Namespace Sharing Capabilities: Multiple Controllers 00:20:55.621 Size (in LBAs): 131072 (0GiB) 00:20:55.621 Capacity (in LBAs): 131072 (0GiB) 00:20:55.621 Utilization (in LBAs): 131072 (0GiB) 00:20:55.621 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:55.621 EUI64: ABCDEF0123456789 00:20:55.621 UUID: b6781f14-9a88-429f-a786-6ebaa83dd87f 00:20:55.621 Thin Provisioning: Not Supported 00:20:55.621 Per-NS Atomic Units: Yes 00:20:55.621 Atomic Boundary Size (Normal): 0 00:20:55.621 Atomic Boundary Size (PFail): 0 00:20:55.621 Atomic Boundary Offset: 0 00:20:55.621 Maximum Single Source Range Length: 65535 00:20:55.621 Maximum Copy Length: 65535 00:20:55.621 Maximum Source Range Count: 1 00:20:55.621 NGUID/EUI64 Never Reused: No 00:20:55.621 Namespace Write Protected: No 00:20:55.621 Number of LBA Formats: 1 00:20:55.621 Current LBA Format: LBA Format #00 00:20:55.621 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:55.621 00:20:55.621 16:32:04 -- host/identify.sh@51 -- # sync 00:20:55.621 16:32:04 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:55.621 16:32:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:55.621 16:32:04 -- common/autotest_common.sh@10 -- # set +x 00:20:55.621 16:32:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:55.621 16:32:04 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:55.621 16:32:04 -- host/identify.sh@56 -- # nvmftestfini 00:20:55.621 16:32:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:55.621 16:32:04 -- nvmf/common.sh@117 -- # sync 00:20:55.621 16:32:04 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:20:55.621 16:32:04 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:20:55.621 16:32:04 -- nvmf/common.sh@120 -- # set +e 00:20:55.621 16:32:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:55.621 16:32:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:20:55.621 rmmod nvme_rdma 00:20:55.621 rmmod nvme_fabrics 00:20:55.621 16:32:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:55.621 16:32:04 -- nvmf/common.sh@124 -- # set -e 00:20:55.621 16:32:04 -- nvmf/common.sh@125 -- # return 0 00:20:55.621 16:32:04 -- nvmf/common.sh@478 -- # '[' -n 528959 ']' 00:20:55.621 16:32:04 -- nvmf/common.sh@479 -- # killprocess 528959 00:20:55.621 16:32:04 -- common/autotest_common.sh@936 -- # '[' -z 528959 ']' 00:20:55.621 16:32:04 -- common/autotest_common.sh@940 -- # kill -0 528959 00:20:55.621 16:32:04 -- 
common/autotest_common.sh@941 -- # uname 00:20:55.621 16:32:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:55.621 16:32:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 528959 00:20:55.621 16:32:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:55.621 16:32:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:55.621 16:32:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 528959' 00:20:55.621 killing process with pid 528959 00:20:55.621 16:32:04 -- common/autotest_common.sh@955 -- # kill 528959 00:20:55.621 [2024-04-26 16:32:04.501836] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:20:55.621 16:32:04 -- common/autotest_common.sh@960 -- # wait 528959 00:20:55.621 [2024-04-26 16:32:04.586850] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:20:55.880 16:32:04 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:55.880 16:32:04 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:20:55.880 00:20:55.880 real 0m8.220s 00:20:55.880 user 0m8.220s 00:20:55.880 sys 0m5.229s 00:20:55.880 16:32:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:55.880 16:32:04 -- common/autotest_common.sh@10 -- # set +x 00:20:55.880 ************************************ 00:20:55.880 END TEST nvmf_identify 00:20:55.880 ************************************ 00:20:55.880 16:32:04 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:20:55.880 16:32:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:55.880 16:32:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:55.880 16:32:04 -- common/autotest_common.sh@10 -- # set +x 00:20:56.139 ************************************ 00:20:56.139 START TEST nvmf_perf 00:20:56.139 ************************************ 00:20:56.139 16:32:05 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:20:56.139 * Looking for test storage... 
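The nvmf_identify pass that just finished can be reproduced by hand while the target is still listening. The commands below are only a minimal sketch in the same shell style as the harness, assuming the subsystem shown above (nqn.2016-06.io.spdk:cnode1 on 192.168.100.8:4420) is still exported and the SPDK example binaries are built; the binary path and the /dev/nvme0 device name are assumptions, not taken from this log:

  # Query the controller directly with SPDK's identify example (same data as printed above);
  # the path is an assumption, adjust to the local build output
  ./build/examples/identify -r 'trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

  # Or use the kernel initiator via nvme-cli; -i 15 mirrors the NVME_CONNECT
  # setting the harness applies for RDMA targets
  sudo modprobe nvme-rdma
  sudo nvme discover -t rdma -a 192.168.100.8 -s 4420
  sudo nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 -i 15
  sudo nvme id-ctrl /dev/nvme0   # check 'nvme list' for the actual device name
  sudo nvme disconnect -n nqn.2016-06.io.spdk:cnode1

Either path should report the controller data shown above: model 'SPDK bdev Controller', serial SPDK00000000000001, 32 namespaces, 127 I/O queues.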
00:20:56.139 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:56.139 16:32:05 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:56.139 16:32:05 -- nvmf/common.sh@7 -- # uname -s 00:20:56.139 16:32:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:56.139 16:32:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:56.140 16:32:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:56.140 16:32:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:56.140 16:32:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:56.140 16:32:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:56.140 16:32:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:56.140 16:32:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:56.140 16:32:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:56.140 16:32:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:56.140 16:32:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:20:56.140 16:32:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:20:56.140 16:32:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:56.140 16:32:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:56.140 16:32:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:56.140 16:32:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:56.140 16:32:05 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:56.140 16:32:05 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:56.140 16:32:05 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:56.140 16:32:05 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:56.140 16:32:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.140 16:32:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.140 16:32:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.140 16:32:05 -- paths/export.sh@5 -- # export PATH 00:20:56.140 16:32:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.140 16:32:05 -- nvmf/common.sh@47 -- # : 0 00:20:56.140 16:32:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:56.140 16:32:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:56.140 16:32:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:56.140 16:32:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:56.140 16:32:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:56.140 16:32:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:56.140 16:32:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:56.140 16:32:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:56.140 16:32:05 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:56.140 16:32:05 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:56.140 16:32:05 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:56.140 16:32:05 -- host/perf.sh@17 -- # nvmftestinit 00:20:56.140 16:32:05 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:20:56.140 16:32:05 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:56.140 16:32:05 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:56.140 16:32:05 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:56.140 16:32:05 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:56.140 16:32:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:56.140 16:32:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:56.140 16:32:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:56.399 16:32:05 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:56.399 16:32:05 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:56.399 16:32:05 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:56.399 16:32:05 -- common/autotest_common.sh@10 -- # set +x 00:21:02.972 16:32:11 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:02.973 16:32:11 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:02.973 16:32:11 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:02.973 16:32:11 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:02.973 16:32:11 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:02.973 16:32:11 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:02.973 16:32:11 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:02.973 16:32:11 -- nvmf/common.sh@295 -- # net_devs=() 
00:21:02.973 16:32:11 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:02.973 16:32:11 -- nvmf/common.sh@296 -- # e810=() 00:21:02.973 16:32:11 -- nvmf/common.sh@296 -- # local -ga e810 00:21:02.973 16:32:11 -- nvmf/common.sh@297 -- # x722=() 00:21:02.973 16:32:11 -- nvmf/common.sh@297 -- # local -ga x722 00:21:02.973 16:32:11 -- nvmf/common.sh@298 -- # mlx=() 00:21:02.973 16:32:11 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:02.973 16:32:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:02.973 16:32:11 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:02.973 16:32:11 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:02.973 16:32:11 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:02.973 16:32:11 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:02.973 16:32:11 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:02.973 16:32:11 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:02.973 16:32:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:02.973 16:32:11 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:02.973 16:32:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:02.973 16:32:11 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:02.973 16:32:11 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:02.973 16:32:11 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:02.973 16:32:11 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:02.973 16:32:11 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:02.973 16:32:11 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:02.973 16:32:11 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:02.973 16:32:11 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:02.973 16:32:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:02.973 16:32:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:21:02.973 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:21:02.973 16:32:11 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:02.973 16:32:11 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:02.973 16:32:11 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:21:02.973 16:32:11 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:21:02.973 16:32:11 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:02.973 16:32:11 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:02.973 16:32:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:02.973 16:32:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:21:02.973 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:21:02.973 16:32:11 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:02.973 16:32:11 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:02.973 16:32:11 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:21:02.973 16:32:11 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:21:02.973 16:32:11 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:02.973 16:32:11 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:02.973 16:32:11 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:02.973 16:32:11 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:02.973 16:32:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:02.973 16:32:11 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.973 16:32:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:02.973 16:32:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.973 16:32:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:21:02.973 Found net devices under 0000:18:00.0: mlx_0_0 00:21:02.973 16:32:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.973 16:32:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:02.973 16:32:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.973 16:32:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:02.973 16:32:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.973 16:32:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:21:02.973 Found net devices under 0000:18:00.1: mlx_0_1 00:21:02.973 16:32:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.973 16:32:11 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:02.973 16:32:11 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:02.973 16:32:11 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:02.973 16:32:11 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:21:02.973 16:32:11 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:21:02.973 16:32:11 -- nvmf/common.sh@409 -- # rdma_device_init 00:21:02.973 16:32:11 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:21:02.973 16:32:11 -- nvmf/common.sh@58 -- # uname 00:21:02.973 16:32:11 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:02.973 16:32:11 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:02.973 16:32:11 -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:02.973 16:32:11 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:02.973 16:32:11 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:02.973 16:32:11 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:02.973 16:32:11 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:02.973 16:32:11 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:02.973 16:32:11 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:21:02.973 16:32:11 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:02.973 16:32:11 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:02.973 16:32:11 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:02.973 16:32:11 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:02.973 16:32:11 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:02.973 16:32:11 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:02.973 16:32:11 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:02.973 16:32:11 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:02.973 16:32:11 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:02.973 16:32:11 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:02.973 16:32:11 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:02.973 16:32:11 -- nvmf/common.sh@105 -- # continue 2 00:21:02.973 16:32:11 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:02.973 16:32:11 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:02.973 16:32:11 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:02.973 16:32:11 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:02.973 16:32:11 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:02.973 16:32:11 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:02.973 16:32:11 -- 
nvmf/common.sh@105 -- # continue 2 00:21:02.973 16:32:11 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:02.973 16:32:11 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:02.973 16:32:11 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:02.973 16:32:11 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:02.973 16:32:11 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:02.973 16:32:11 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:02.973 16:32:11 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:02.973 16:32:11 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:02.973 16:32:11 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:02.973 15: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:02.973 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:21:02.973 altname enp24s0f0np0 00:21:02.973 altname ens785f0np0 00:21:02.973 inet 192.168.100.8/24 scope global mlx_0_0 00:21:02.973 valid_lft forever preferred_lft forever 00:21:02.973 16:32:11 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:02.973 16:32:11 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:02.973 16:32:11 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:02.973 16:32:11 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:02.973 16:32:11 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:02.973 16:32:11 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:02.973 16:32:11 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:02.973 16:32:11 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:02.973 16:32:11 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:02.973 16: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:02.973 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:21:02.973 altname enp24s0f1np1 00:21:02.973 altname ens785f1np1 00:21:02.973 inet 192.168.100.9/24 scope global mlx_0_1 00:21:02.973 valid_lft forever preferred_lft forever 00:21:02.973 16:32:11 -- nvmf/common.sh@411 -- # return 0 00:21:02.973 16:32:11 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:02.973 16:32:11 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:02.973 16:32:11 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:21:02.973 16:32:11 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:21:02.973 16:32:11 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:02.973 16:32:11 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:02.973 16:32:11 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:02.973 16:32:11 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:02.973 16:32:11 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:02.973 16:32:11 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:02.973 16:32:11 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:02.973 16:32:11 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:02.973 16:32:11 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:02.973 16:32:11 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:02.973 16:32:11 -- nvmf/common.sh@105 -- # continue 2 00:21:02.973 16:32:11 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:02.973 16:32:11 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:02.973 16:32:11 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:02.973 16:32:11 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:02.973 16:32:11 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
00:21:02.973 16:32:11 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:02.973 16:32:11 -- nvmf/common.sh@105 -- # continue 2 00:21:02.973 16:32:11 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:02.973 16:32:11 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:02.974 16:32:11 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:02.974 16:32:11 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:02.974 16:32:11 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:02.974 16:32:11 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:02.974 16:32:11 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:02.974 16:32:11 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:02.974 16:32:11 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:02.974 16:32:11 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:02.974 16:32:11 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:02.974 16:32:11 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:02.974 16:32:11 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:21:02.974 192.168.100.9' 00:21:02.974 16:32:11 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:21:02.974 192.168.100.9' 00:21:02.974 16:32:11 -- nvmf/common.sh@446 -- # head -n 1 00:21:02.974 16:32:11 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:02.974 16:32:11 -- nvmf/common.sh@447 -- # head -n 1 00:21:02.974 16:32:11 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:21:02.974 192.168.100.9' 00:21:02.974 16:32:11 -- nvmf/common.sh@447 -- # tail -n +2 00:21:02.974 16:32:11 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:02.974 16:32:11 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:21:02.974 16:32:11 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:02.974 16:32:11 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:21:02.974 16:32:11 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:21:02.974 16:32:11 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:21:02.974 16:32:11 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:21:02.974 16:32:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:02.974 16:32:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:02.974 16:32:11 -- common/autotest_common.sh@10 -- # set +x 00:21:02.974 16:32:11 -- nvmf/common.sh@470 -- # nvmfpid=532266 00:21:02.974 16:32:11 -- nvmf/common.sh@471 -- # waitforlisten 532266 00:21:02.974 16:32:11 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:02.974 16:32:11 -- common/autotest_common.sh@817 -- # '[' -z 532266 ']' 00:21:02.974 16:32:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.974 16:32:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:02.974 16:32:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:02.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:02.974 16:32:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:02.974 16:32:11 -- common/autotest_common.sh@10 -- # set +x 00:21:02.974 [2024-04-26 16:32:11.691421] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
00:21:02.974 [2024-04-26 16:32:11.691473] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:02.974 EAL: No free 2048 kB hugepages reported on node 1 00:21:02.974 [2024-04-26 16:32:11.763604] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:02.974 [2024-04-26 16:32:11.846609] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:02.974 [2024-04-26 16:32:11.846649] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:02.974 [2024-04-26 16:32:11.846659] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:02.974 [2024-04-26 16:32:11.846668] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:02.974 [2024-04-26 16:32:11.846677] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:02.974 [2024-04-26 16:32:11.846725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:02.974 [2024-04-26 16:32:11.846810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:02.974 [2024-04-26 16:32:11.846830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:02.974 [2024-04-26 16:32:11.846831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:03.543 16:32:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:03.543 16:32:12 -- common/autotest_common.sh@850 -- # return 0 00:21:03.543 16:32:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:03.543 16:32:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:03.543 16:32:12 -- common/autotest_common.sh@10 -- # set +x 00:21:03.543 16:32:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:03.543 16:32:12 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:21:03.543 16:32:12 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:21:04.920 16:32:13 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:21:04.920 16:32:13 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:21:04.920 16:32:13 -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:21:04.920 16:32:13 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:05.179 16:32:14 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:21:05.179 16:32:14 -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:21:05.179 16:32:14 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:21:05.179 16:32:14 -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:21:05.179 16:32:14 -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:21:05.438 [2024-04-26 16:32:14.251129] rdma.c:2712:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:21:05.438 [2024-04-26 16:32:14.270885] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f33d60/0x1f41300) succeed. 
00:21:05.438 [2024-04-26 16:32:14.281468] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f353a0/0x1ea1220) succeed. 00:21:05.438 16:32:14 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:05.697 16:32:14 -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:05.697 16:32:14 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:05.955 16:32:14 -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:05.955 16:32:14 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:05.955 16:32:14 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:06.213 [2024-04-26 16:32:15.115590] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:06.213 16:32:15 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:21:06.470 16:32:15 -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:21:06.470 16:32:15 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:21:06.470 16:32:15 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:21:06.470 16:32:15 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:21:07.843 Initializing NVMe Controllers 00:21:07.843 Attached to NVMe Controller at 0000:5e:00.0 [144d:a80a] 00:21:07.843 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:21:07.843 Initialization complete. Launching workers. 00:21:07.843 ======================================================== 00:21:07.843 Latency(us) 00:21:07.843 Device Information : IOPS MiB/s Average min max 00:21:07.843 PCIE (0000:5e:00.0) NSID 1 from core 0: 97120.61 379.38 329.06 59.61 4413.27 00:21:07.843 ======================================================== 00:21:07.843 Total : 97120.61 379.38 329.06 59.61 4413.27 00:21:07.843 00:21:07.843 16:32:16 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:21:07.843 EAL: No free 2048 kB hugepages reported on node 1 00:21:11.125 Initializing NVMe Controllers 00:21:11.125 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:21:11.125 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:11.125 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:11.125 Initialization complete. Launching workers. 
00:21:11.125 ======================================================== 00:21:11.125 Latency(us) 00:21:11.125 Device Information : IOPS MiB/s Average min max 00:21:11.125 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6862.00 26.80 145.52 49.92 4245.50 00:21:11.125 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5182.00 20.24 192.77 75.56 4199.60 00:21:11.125 ======================================================== 00:21:11.125 Total : 12044.00 47.05 165.85 49.92 4245.50 00:21:11.125 00:21:11.125 16:32:19 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:21:11.125 EAL: No free 2048 kB hugepages reported on node 1 00:21:14.421 Initializing NVMe Controllers 00:21:14.421 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:21:14.422 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:14.422 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:14.422 Initialization complete. Launching workers. 00:21:14.422 ======================================================== 00:21:14.422 Latency(us) 00:21:14.422 Device Information : IOPS MiB/s Average min max 00:21:14.422 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18276.00 71.39 1750.87 476.85 5538.74 00:21:14.422 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4032.00 15.75 7971.28 4926.89 10906.48 00:21:14.422 ======================================================== 00:21:14.422 Total : 22308.00 87.14 2875.16 476.85 10906.48 00:21:14.422 00:21:14.422 16:32:23 -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:21:14.422 16:32:23 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:21:14.422 EAL: No free 2048 kB hugepages reported on node 1 00:21:19.694 Initializing NVMe Controllers 00:21:19.694 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:21:19.694 Controller IO queue size 128, less than required. 00:21:19.694 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:19.694 Controller IO queue size 128, less than required. 00:21:19.694 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:19.694 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:19.694 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:19.694 Initialization complete. Launching workers. 
00:21:19.694 ======================================================== 00:21:19.694 Latency(us) 00:21:19.694 Device Information : IOPS MiB/s Average min max 00:21:19.694 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3467.10 866.77 36978.27 15972.99 78085.96 00:21:19.694 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3593.36 898.34 35359.23 15153.21 56299.23 00:21:19.694 ======================================================== 00:21:19.694 Total : 7060.46 1765.11 36154.27 15153.21 78085.96 00:21:19.694 00:21:19.694 16:32:27 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:21:19.694 EAL: No free 2048 kB hugepages reported on node 1 00:21:19.694 No valid NVMe controllers or AIO or URING devices found 00:21:19.694 Initializing NVMe Controllers 00:21:19.694 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:21:19.694 Controller IO queue size 128, less than required. 00:21:19.694 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:19.694 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:21:19.694 Controller IO queue size 128, less than required. 00:21:19.694 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:19.694 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:21:19.694 WARNING: Some requested NVMe devices were skipped 00:21:19.694 16:32:28 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:21:19.694 EAL: No free 2048 kB hugepages reported on node 1 00:21:23.886 Initializing NVMe Controllers 00:21:23.886 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:21:23.886 Controller IO queue size 128, less than required. 00:21:23.886 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:23.886 Controller IO queue size 128, less than required. 00:21:23.886 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:23.886 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:23.886 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:23.886 Initialization complete. Launching workers. 
00:21:23.886 00:21:23.886 ==================== 00:21:23.886 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:21:23.886 RDMA transport: 00:21:23.886 dev name: mlx5_0 00:21:23.886 polls: 396628 00:21:23.886 idle_polls: 393471 00:21:23.886 completions: 43486 00:21:23.886 queued_requests: 1 00:21:23.886 total_send_wrs: 21743 00:21:23.886 send_doorbell_updates: 2910 00:21:23.886 total_recv_wrs: 21870 00:21:23.886 recv_doorbell_updates: 2913 00:21:23.886 --------------------------------- 00:21:23.886 00:21:23.886 ==================== 00:21:23.886 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:21:23.886 RDMA transport: 00:21:23.886 dev name: mlx5_0 00:21:23.886 polls: 397967 00:21:23.886 idle_polls: 397699 00:21:23.886 completions: 19926 00:21:23.886 queued_requests: 1 00:21:23.886 total_send_wrs: 9963 00:21:23.886 send_doorbell_updates: 252 00:21:23.886 total_recv_wrs: 10090 00:21:23.886 recv_doorbell_updates: 253 00:21:23.886 --------------------------------- 00:21:23.886 ======================================================== 00:21:23.886 Latency(us) 00:21:23.886 Device Information : IOPS MiB/s Average min max 00:21:23.886 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5435.50 1358.87 23607.63 11758.72 62736.66 00:21:23.886 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2490.50 622.62 51173.20 31660.46 77050.19 00:21:23.886 ======================================================== 00:21:23.886 Total : 7926.00 1981.50 32269.26 11758.72 77050.19 00:21:23.886 00:21:23.886 16:32:32 -- host/perf.sh@66 -- # sync 00:21:23.886 16:32:32 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:23.886 16:32:32 -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:21:23.886 16:32:32 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:23.886 16:32:32 -- host/perf.sh@114 -- # nvmftestfini 00:21:23.886 16:32:32 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:23.886 16:32:32 -- nvmf/common.sh@117 -- # sync 00:21:23.886 16:32:32 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:23.886 16:32:32 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:23.886 16:32:32 -- nvmf/common.sh@120 -- # set +e 00:21:23.886 16:32:32 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:23.886 16:32:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:23.886 rmmod nvme_rdma 00:21:23.886 rmmod nvme_fabrics 00:21:23.886 16:32:32 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:23.886 16:32:32 -- nvmf/common.sh@124 -- # set -e 00:21:23.886 16:32:32 -- nvmf/common.sh@125 -- # return 0 00:21:23.886 16:32:32 -- nvmf/common.sh@478 -- # '[' -n 532266 ']' 00:21:23.886 16:32:32 -- nvmf/common.sh@479 -- # killprocess 532266 00:21:23.886 16:32:32 -- common/autotest_common.sh@936 -- # '[' -z 532266 ']' 00:21:23.886 16:32:32 -- common/autotest_common.sh@940 -- # kill -0 532266 00:21:23.886 16:32:32 -- common/autotest_common.sh@941 -- # uname 00:21:23.886 16:32:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:23.886 16:32:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 532266 00:21:23.886 16:32:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:23.886 16:32:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:23.886 16:32:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 532266' 00:21:23.886 
killing process with pid 532266 00:21:23.886 16:32:32 -- common/autotest_common.sh@955 -- # kill 532266 00:21:23.886 16:32:32 -- common/autotest_common.sh@960 -- # wait 532266 00:21:23.886 [2024-04-26 16:32:32.699446] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:21:25.789 16:32:34 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:25.789 16:32:34 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:21:25.789 00:21:25.789 real 0m29.765s 00:21:25.789 user 1m33.812s 00:21:25.789 sys 0m6.415s 00:21:25.789 16:32:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:25.789 16:32:34 -- common/autotest_common.sh@10 -- # set +x 00:21:25.789 ************************************ 00:21:25.789 END TEST nvmf_perf 00:21:25.789 ************************************ 00:21:26.048 16:32:34 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:21:26.048 16:32:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:26.048 16:32:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:26.048 16:32:34 -- common/autotest_common.sh@10 -- # set +x 00:21:26.048 ************************************ 00:21:26.048 START TEST nvmf_fio_host 00:21:26.048 ************************************ 00:21:26.048 16:32:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:21:26.307 * Looking for test storage... 00:21:26.307 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:26.307 16:32:35 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:26.307 16:32:35 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:26.307 16:32:35 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:26.307 16:32:35 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:26.307 16:32:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.307 16:32:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.308 16:32:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.308 16:32:35 -- paths/export.sh@5 -- # export PATH 00:21:26.308 16:32:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.308 16:32:35 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:26.308 16:32:35 -- nvmf/common.sh@7 -- # uname -s 00:21:26.308 16:32:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:26.308 16:32:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:26.308 16:32:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:26.308 16:32:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:26.308 16:32:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:26.308 16:32:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:26.308 16:32:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:26.308 16:32:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:26.308 16:32:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:26.308 16:32:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:26.308 16:32:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:21:26.308 16:32:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:21:26.308 16:32:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:26.308 16:32:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:26.308 16:32:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:26.308 16:32:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:26.308 16:32:35 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:26.308 16:32:35 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:26.308 16:32:35 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:26.308 16:32:35 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:26.308 16:32:35 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.308 16:32:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.308 16:32:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.308 16:32:35 -- paths/export.sh@5 -- # export PATH 00:21:26.308 16:32:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.308 16:32:35 -- nvmf/common.sh@47 -- # : 0 00:21:26.308 16:32:35 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:26.308 16:32:35 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:26.308 16:32:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:26.308 16:32:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:26.308 16:32:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:26.308 16:32:35 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:26.308 16:32:35 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:26.308 16:32:35 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:26.308 16:32:35 -- host/fio.sh@12 -- # nvmftestinit 00:21:26.308 16:32:35 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:21:26.308 16:32:35 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:26.308 16:32:35 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:26.308 16:32:35 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:26.308 16:32:35 -- 
nvmf/common.sh@401 -- # remove_spdk_ns 00:21:26.308 16:32:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:26.308 16:32:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:26.308 16:32:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:26.308 16:32:35 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:26.308 16:32:35 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:26.308 16:32:35 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:26.308 16:32:35 -- common/autotest_common.sh@10 -- # set +x 00:21:32.880 16:32:40 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:32.880 16:32:40 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:32.880 16:32:40 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:32.880 16:32:40 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:32.880 16:32:40 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:32.880 16:32:40 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:32.880 16:32:40 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:32.880 16:32:40 -- nvmf/common.sh@295 -- # net_devs=() 00:21:32.880 16:32:40 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:32.880 16:32:40 -- nvmf/common.sh@296 -- # e810=() 00:21:32.880 16:32:40 -- nvmf/common.sh@296 -- # local -ga e810 00:21:32.880 16:32:40 -- nvmf/common.sh@297 -- # x722=() 00:21:32.880 16:32:40 -- nvmf/common.sh@297 -- # local -ga x722 00:21:32.880 16:32:40 -- nvmf/common.sh@298 -- # mlx=() 00:21:32.880 16:32:40 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:32.880 16:32:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:32.880 16:32:40 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:32.880 16:32:40 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:32.880 16:32:40 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:32.880 16:32:40 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:32.880 16:32:40 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:32.880 16:32:40 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:32.880 16:32:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:32.880 16:32:40 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:32.880 16:32:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:32.880 16:32:40 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:32.880 16:32:40 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:32.880 16:32:40 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:32.880 16:32:40 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:32.880 16:32:40 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:32.880 16:32:40 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:32.880 16:32:40 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:32.880 16:32:40 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:32.880 16:32:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:32.880 16:32:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:21:32.880 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:21:32.880 16:32:40 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:32.880 16:32:40 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:32.880 16:32:40 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:21:32.880 16:32:40 -- nvmf/common.sh@351 
-- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:21:32.880 16:32:40 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:32.880 16:32:40 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:32.880 16:32:40 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:32.880 16:32:40 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:21:32.880 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:21:32.880 16:32:40 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:32.880 16:32:40 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:32.880 16:32:40 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:21:32.880 16:32:40 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:21:32.880 16:32:40 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:32.880 16:32:40 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:32.880 16:32:40 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:32.880 16:32:40 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:32.880 16:32:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:32.880 16:32:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:32.880 16:32:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:32.880 16:32:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:32.880 16:32:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:21:32.880 Found net devices under 0000:18:00.0: mlx_0_0 00:21:32.880 16:32:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:32.880 16:32:40 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:32.880 16:32:40 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:32.880 16:32:40 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:32.880 16:32:40 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:32.880 16:32:40 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:21:32.880 Found net devices under 0000:18:00.1: mlx_0_1 00:21:32.880 16:32:40 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:32.880 16:32:40 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:32.880 16:32:40 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:32.880 16:32:40 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:32.880 16:32:40 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:21:32.880 16:32:40 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:21:32.880 16:32:40 -- nvmf/common.sh@409 -- # rdma_device_init 00:21:32.880 16:32:40 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:21:32.880 16:32:40 -- nvmf/common.sh@58 -- # uname 00:21:32.880 16:32:40 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:32.880 16:32:40 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:32.880 16:32:40 -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:32.880 16:32:40 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:32.880 16:32:40 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:32.880 16:32:40 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:32.880 16:32:40 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:32.880 16:32:40 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:32.880 16:32:40 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:21:32.880 16:32:40 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:32.880 16:32:40 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:32.880 16:32:40 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:32.880 16:32:40 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:32.880 
16:32:40 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:32.881 16:32:40 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:32.881 16:32:40 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:32.881 16:32:40 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:32.881 16:32:40 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:32.881 16:32:40 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:32.881 16:32:40 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:32.881 16:32:40 -- nvmf/common.sh@105 -- # continue 2 00:21:32.881 16:32:40 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:32.881 16:32:40 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:32.881 16:32:40 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:32.881 16:32:40 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:32.881 16:32:40 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:32.881 16:32:40 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:32.881 16:32:40 -- nvmf/common.sh@105 -- # continue 2 00:21:32.881 16:32:40 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:32.881 16:32:40 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:32.881 16:32:40 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:32.881 16:32:40 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:32.881 16:32:40 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:32.881 16:32:40 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:32.881 16:32:40 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:32.881 16:32:40 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:32.881 16:32:40 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:32.881 15: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:32.881 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:21:32.881 altname enp24s0f0np0 00:21:32.881 altname ens785f0np0 00:21:32.881 inet 192.168.100.8/24 scope global mlx_0_0 00:21:32.881 valid_lft forever preferred_lft forever 00:21:32.881 16:32:40 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:32.881 16:32:40 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:32.881 16:32:40 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:32.881 16:32:40 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:32.881 16:32:40 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:32.881 16:32:40 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:32.881 16:32:40 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:32.881 16:32:40 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:32.881 16:32:40 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:32.881 16: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:32.881 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:21:32.881 altname enp24s0f1np1 00:21:32.881 altname ens785f1np1 00:21:32.881 inet 192.168.100.9/24 scope global mlx_0_1 00:21:32.881 valid_lft forever preferred_lft forever 00:21:32.881 16:32:40 -- nvmf/common.sh@411 -- # return 0 00:21:32.881 16:32:40 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:32.881 16:32:40 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:32.881 16:32:40 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:21:32.881 16:32:40 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:21:32.881 16:32:40 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:32.881 16:32:40 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 
00:21:32.881 16:32:40 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:32.881 16:32:40 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:32.881 16:32:40 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:32.881 16:32:40 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:32.881 16:32:40 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:32.881 16:32:40 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:32.881 16:32:40 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:32.881 16:32:40 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:32.881 16:32:40 -- nvmf/common.sh@105 -- # continue 2 00:21:32.881 16:32:40 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:32.881 16:32:40 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:32.881 16:32:40 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:32.881 16:32:40 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:32.881 16:32:40 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:32.881 16:32:40 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:32.881 16:32:40 -- nvmf/common.sh@105 -- # continue 2 00:21:32.881 16:32:40 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:32.881 16:32:40 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:32.881 16:32:40 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:32.881 16:32:40 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:32.881 16:32:40 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:32.881 16:32:40 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:32.881 16:32:40 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:32.881 16:32:40 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:32.881 16:32:40 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:32.881 16:32:40 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:32.881 16:32:40 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:32.881 16:32:40 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:32.881 16:32:40 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:21:32.881 192.168.100.9' 00:21:32.881 16:32:40 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:21:32.881 192.168.100.9' 00:21:32.881 16:32:40 -- nvmf/common.sh@446 -- # head -n 1 00:21:32.881 16:32:40 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:32.881 16:32:40 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:21:32.881 192.168.100.9' 00:21:32.881 16:32:40 -- nvmf/common.sh@447 -- # tail -n +2 00:21:32.881 16:32:40 -- nvmf/common.sh@447 -- # head -n 1 00:21:32.881 16:32:40 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:32.881 16:32:40 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:21:32.881 16:32:40 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:32.881 16:32:40 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:21:32.881 16:32:40 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:21:32.881 16:32:40 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:21:32.881 16:32:40 -- host/fio.sh@14 -- # [[ y != y ]] 00:21:32.881 16:32:40 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:21:32.881 16:32:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:32.881 16:32:40 -- common/autotest_common.sh@10 -- # set +x 00:21:32.881 16:32:40 -- host/fio.sh@22 -- # nvmfpid=538168 00:21:32.881 16:32:40 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:21:32.881 16:32:40 -- host/fio.sh@26 -- # waitforlisten 538168 00:21:32.881 16:32:40 -- host/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:32.881 16:32:40 -- common/autotest_common.sh@817 -- # '[' -z 538168 ']' 00:21:32.881 16:32:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:32.881 16:32:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:32.881 16:32:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:32.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:32.881 16:32:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:32.881 16:32:40 -- common/autotest_common.sh@10 -- # set +x 00:21:32.881 [2024-04-26 16:32:41.027164] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:21:32.881 [2024-04-26 16:32:41.027222] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:32.881 EAL: No free 2048 kB hugepages reported on node 1 00:21:32.881 [2024-04-26 16:32:41.099242] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:32.881 [2024-04-26 16:32:41.181907] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:32.881 [2024-04-26 16:32:41.181947] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:32.881 [2024-04-26 16:32:41.181958] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:32.881 [2024-04-26 16:32:41.181967] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:32.881 [2024-04-26 16:32:41.181975] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:32.881 [2024-04-26 16:32:41.182028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:32.881 [2024-04-26 16:32:41.182045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:32.881 [2024-04-26 16:32:41.182124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:32.881 [2024-04-26 16:32:41.182126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.881 16:32:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:32.881 16:32:41 -- common/autotest_common.sh@850 -- # return 0 00:21:32.881 16:32:41 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:32.881 16:32:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:32.881 16:32:41 -- common/autotest_common.sh@10 -- # set +x 00:21:32.881 [2024-04-26 16:32:41.893119] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2022310/0x2026800) succeed. 00:21:32.881 [2024-04-26 16:32:41.903424] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2023950/0x2067e90) succeed. 
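For reference, the target-side setup that this fio host test drives (and that the perf test above used in the same way) condenses to the RPC sequence sketched below; the individual rpc_cmd calls appear in the trace that follows. This is only an illustrative condensation, not part of the harness: it assumes an already-running nvmf_tgt and reuses the rpc.py path, NQN, and 192.168.100.8 listener address shown in this run.

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  # RDMA transport, backed by the mlx5_0/mlx5_1 IB devices created above
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  # RAM-backed malloc bdev that is exposed as namespace 1
  $rpc bdev_malloc_create 64 512 -b Malloc1
  # Subsystem, namespace, and RDMA listeners on 192.168.100.8:4420
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

The fio jobs below then reach this subsystem through the SPDK ioengine via --filename='trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1', as shown in the fio_nvme invocations in the trace.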
00:21:33.140 16:32:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:33.140 16:32:42 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:21:33.140 16:32:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:33.140 16:32:42 -- common/autotest_common.sh@10 -- # set +x 00:21:33.140 16:32:42 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:33.140 16:32:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:33.140 16:32:42 -- common/autotest_common.sh@10 -- # set +x 00:21:33.140 Malloc1 00:21:33.140 16:32:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:33.140 16:32:42 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:33.140 16:32:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:33.140 16:32:42 -- common/autotest_common.sh@10 -- # set +x 00:21:33.140 16:32:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:33.140 16:32:42 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:33.140 16:32:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:33.140 16:32:42 -- common/autotest_common.sh@10 -- # set +x 00:21:33.140 16:32:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:33.140 16:32:42 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:33.140 16:32:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:33.140 16:32:42 -- common/autotest_common.sh@10 -- # set +x 00:21:33.140 [2024-04-26 16:32:42.124963] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:33.140 16:32:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:33.140 16:32:42 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:21:33.140 16:32:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:33.140 16:32:42 -- common/autotest_common.sh@10 -- # set +x 00:21:33.140 16:32:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:33.140 16:32:42 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:21:33.140 16:32:42 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:21:33.140 16:32:42 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:21:33.140 16:32:42 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:21:33.140 16:32:42 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:33.140 16:32:42 -- common/autotest_common.sh@1325 -- # local sanitizers 00:21:33.140 16:32:42 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:21:33.140 16:32:42 -- common/autotest_common.sh@1327 -- # shift 00:21:33.140 16:32:42 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:21:33.140 16:32:42 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:21:33.140 16:32:42 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:21:33.140 16:32:42 -- 
common/autotest_common.sh@1331 -- # grep libasan 00:21:33.140 16:32:42 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:21:33.397 16:32:42 -- common/autotest_common.sh@1331 -- # asan_lib= 00:21:33.397 16:32:42 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:21:33.397 16:32:42 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:21:33.397 16:32:42 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:21:33.397 16:32:42 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:21:33.397 16:32:42 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:21:33.397 16:32:42 -- common/autotest_common.sh@1331 -- # asan_lib= 00:21:33.397 16:32:42 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:21:33.397 16:32:42 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:33.397 16:32:42 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:21:33.661 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:33.661 fio-3.35 00:21:33.661 Starting 1 thread 00:21:33.661 EAL: No free 2048 kB hugepages reported on node 1 00:21:36.185 00:21:36.185 test: (groupid=0, jobs=1): err= 0: pid=538464: Fri Apr 26 16:32:44 2024 00:21:36.185 read: IOPS=17.6k, BW=68.9MiB/s (72.2MB/s)(138MiB/2004msec) 00:21:36.185 slat (nsec): min=1391, max=35919, avg=1521.91, stdev=466.36 00:21:36.185 clat (usec): min=1935, max=6688, avg=3603.43, stdev=78.99 00:21:36.185 lat (usec): min=1949, max=6689, avg=3604.95, stdev=78.92 00:21:36.185 clat percentiles (usec): 00:21:36.185 | 1.00th=[ 3556], 5.00th=[ 3589], 10.00th=[ 3589], 20.00th=[ 3589], 00:21:36.185 | 30.00th=[ 3589], 40.00th=[ 3589], 50.00th=[ 3589], 60.00th=[ 3621], 00:21:36.185 | 70.00th=[ 3621], 80.00th=[ 3621], 90.00th=[ 3621], 95.00th=[ 3621], 00:21:36.185 | 99.00th=[ 3654], 99.50th=[ 3720], 99.90th=[ 4424], 99.95th=[ 5735], 00:21:36.185 | 99.99th=[ 6259] 00:21:36.185 bw ( KiB/s): min=69032, max=71120, per=100.00%, avg=70552.00, stdev=1016.16, samples=4 00:21:36.185 iops : min=17258, max=17780, avg=17638.00, stdev=254.04, samples=4 00:21:36.185 write: IOPS=17.6k, BW=68.9MiB/s (72.2MB/s)(138MiB/2004msec); 0 zone resets 00:21:36.185 slat (nsec): min=1446, max=18539, avg=1838.54, stdev=505.22 00:21:36.185 clat (usec): min=2727, max=6677, avg=3601.57, stdev=83.22 00:21:36.185 lat (usec): min=2739, max=6679, avg=3603.41, stdev=83.16 00:21:36.185 clat percentiles (usec): 00:21:36.185 | 1.00th=[ 3556], 5.00th=[ 3589], 10.00th=[ 3589], 20.00th=[ 3589], 00:21:36.185 | 30.00th=[ 3589], 40.00th=[ 3589], 50.00th=[ 3589], 60.00th=[ 3589], 00:21:36.185 | 70.00th=[ 3621], 80.00th=[ 3621], 90.00th=[ 3621], 95.00th=[ 3621], 00:21:36.185 | 99.00th=[ 3654], 99.50th=[ 3720], 99.90th=[ 4883], 99.95th=[ 5800], 00:21:36.185 | 99.99th=[ 6652] 00:21:36.185 bw ( KiB/s): min=68992, max=71168, per=100.00%, avg=70564.00, stdev=1049.71, samples=4 00:21:36.186 iops : min=17248, max=17792, avg=17641.00, stdev=262.43, samples=4 00:21:36.186 lat (msec) : 2=0.01%, 4=99.86%, 10=0.14% 00:21:36.186 cpu : usr=99.35%, sys=0.25%, ctx=16, majf=0, minf=3 00:21:36.186 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:21:36.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:21:36.186 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:36.186 issued rwts: total=35343,35342,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:36.186 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:36.186 00:21:36.186 Run status group 0 (all jobs): 00:21:36.186 READ: bw=68.9MiB/s (72.2MB/s), 68.9MiB/s-68.9MiB/s (72.2MB/s-72.2MB/s), io=138MiB (145MB), run=2004-2004msec 00:21:36.186 WRITE: bw=68.9MiB/s (72.2MB/s), 68.9MiB/s-68.9MiB/s (72.2MB/s-72.2MB/s), io=138MiB (145MB), run=2004-2004msec 00:21:36.186 16:32:44 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:21:36.186 16:32:44 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:21:36.186 16:32:44 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:21:36.186 16:32:44 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:36.186 16:32:44 -- common/autotest_common.sh@1325 -- # local sanitizers 00:21:36.186 16:32:44 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:21:36.186 16:32:44 -- common/autotest_common.sh@1327 -- # shift 00:21:36.186 16:32:44 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:21:36.186 16:32:44 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:21:36.186 16:32:44 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:21:36.186 16:32:44 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:21:36.186 16:32:44 -- common/autotest_common.sh@1331 -- # grep libasan 00:21:36.186 16:32:44 -- common/autotest_common.sh@1331 -- # asan_lib= 00:21:36.186 16:32:44 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:21:36.186 16:32:44 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:21:36.186 16:32:44 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:21:36.186 16:32:44 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:21:36.186 16:32:44 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:21:36.186 16:32:44 -- common/autotest_common.sh@1331 -- # asan_lib= 00:21:36.186 16:32:44 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:21:36.186 16:32:44 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:36.186 16:32:44 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:21:36.186 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:36.186 fio-3.35 00:21:36.186 Starting 1 thread 00:21:36.186 EAL: No free 2048 kB hugepages reported on node 1 00:21:38.758 00:21:38.758 test: (groupid=0, jobs=1): err= 0: pid=538915: Fri Apr 26 16:32:47 2024 00:21:38.758 read: IOPS=13.1k, BW=204MiB/s (214MB/s)(404MiB/1978msec) 00:21:38.758 slat (nsec): min=2305, max=50470, avg=2625.50, stdev=1089.48 00:21:38.758 clat (usec): min=265, 
max=9324, avg=1777.87, stdev=1116.09 00:21:38.758 lat (usec): min=268, max=9330, avg=1780.49, stdev=1116.51 00:21:38.758 clat percentiles (usec): 00:21:38.758 | 1.00th=[ 611], 5.00th=[ 840], 10.00th=[ 971], 20.00th=[ 1123], 00:21:38.758 | 30.00th=[ 1254], 40.00th=[ 1352], 50.00th=[ 1483], 60.00th=[ 1631], 00:21:38.758 | 70.00th=[ 1795], 80.00th=[ 2040], 90.00th=[ 2573], 95.00th=[ 4883], 00:21:38.758 | 99.00th=[ 6521], 99.50th=[ 7111], 99.90th=[ 8291], 99.95th=[ 9241], 00:21:38.758 | 99.99th=[ 9372] 00:21:38.758 bw ( KiB/s): min=100384, max=106336, per=49.45%, avg=103392.00, stdev=2645.38, samples=4 00:21:38.758 iops : min= 6274, max= 6646, avg=6462.00, stdev=165.34, samples=4 00:21:38.758 write: IOPS=7226, BW=113MiB/s (118MB/s)(210MiB/1858msec); 0 zone resets 00:21:38.758 slat (usec): min=27, max=128, avg=30.00, stdev= 6.29 00:21:38.758 clat (usec): min=4716, max=22792, avg=14056.75, stdev=1838.60 00:21:38.758 lat (usec): min=4744, max=22823, avg=14086.75, stdev=1838.36 00:21:38.758 clat percentiles (usec): 00:21:38.758 | 1.00th=[ 8225], 5.00th=[11731], 10.00th=[12256], 20.00th=[12780], 00:21:38.758 | 30.00th=[13173], 40.00th=[13566], 50.00th=[13960], 60.00th=[14353], 00:21:38.758 | 70.00th=[14746], 80.00th=[15401], 90.00th=[16319], 95.00th=[17171], 00:21:38.758 | 99.00th=[19006], 99.50th=[20317], 99.90th=[21103], 99.95th=[22414], 00:21:38.758 | 99.99th=[22676] 00:21:38.758 bw ( KiB/s): min=101792, max=110208, per=92.06%, avg=106448.00, stdev=3610.05, samples=4 00:21:38.758 iops : min= 6362, max= 6888, avg=6653.00, stdev=225.63, samples=4 00:21:38.758 lat (usec) : 500=0.22%, 750=1.67%, 1000=5.81% 00:21:38.758 lat (msec) : 2=44.06%, 4=10.11%, 10=4.48%, 20=33.42%, 50=0.23% 00:21:38.758 cpu : usr=96.86%, sys=1.85%, ctx=185, majf=0, minf=2 00:21:38.758 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:21:38.758 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:38.758 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:38.758 issued rwts: total=25846,13427,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:38.758 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:38.758 00:21:38.758 Run status group 0 (all jobs): 00:21:38.758 READ: bw=204MiB/s (214MB/s), 204MiB/s-204MiB/s (214MB/s-214MB/s), io=404MiB (423MB), run=1978-1978msec 00:21:38.758 WRITE: bw=113MiB/s (118MB/s), 113MiB/s-113MiB/s (118MB/s-118MB/s), io=210MiB (220MB), run=1858-1858msec 00:21:38.758 16:32:47 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:38.758 16:32:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:38.758 16:32:47 -- common/autotest_common.sh@10 -- # set +x 00:21:38.758 16:32:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:38.758 16:32:47 -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:21:38.758 16:32:47 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:21:38.758 16:32:47 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:21:38.758 16:32:47 -- host/fio.sh@84 -- # nvmftestfini 00:21:38.758 16:32:47 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:38.758 16:32:47 -- nvmf/common.sh@117 -- # sync 00:21:38.758 16:32:47 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:38.758 16:32:47 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:38.758 16:32:47 -- nvmf/common.sh@120 -- # set +e 00:21:38.758 16:32:47 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:38.758 16:32:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:38.758 rmmod nvme_rdma 00:21:38.758 rmmod 
nvme_fabrics 00:21:38.758 16:32:47 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:38.758 16:32:47 -- nvmf/common.sh@124 -- # set -e 00:21:38.758 16:32:47 -- nvmf/common.sh@125 -- # return 0 00:21:38.758 16:32:47 -- nvmf/common.sh@478 -- # '[' -n 538168 ']' 00:21:38.758 16:32:47 -- nvmf/common.sh@479 -- # killprocess 538168 00:21:38.758 16:32:47 -- common/autotest_common.sh@936 -- # '[' -z 538168 ']' 00:21:38.758 16:32:47 -- common/autotest_common.sh@940 -- # kill -0 538168 00:21:38.758 16:32:47 -- common/autotest_common.sh@941 -- # uname 00:21:38.758 16:32:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:38.758 16:32:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 538168 00:21:38.758 16:32:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:38.758 16:32:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:38.758 16:32:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 538168' 00:21:38.758 killing process with pid 538168 00:21:38.758 16:32:47 -- common/autotest_common.sh@955 -- # kill 538168 00:21:38.758 16:32:47 -- common/autotest_common.sh@960 -- # wait 538168 00:21:38.758 [2024-04-26 16:32:47.656353] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:21:39.017 16:32:47 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:39.017 16:32:47 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:21:39.017 00:21:39.017 real 0m12.903s 00:21:39.017 user 0m37.854s 00:21:39.017 sys 0m5.277s 00:21:39.017 16:32:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:39.017 16:32:47 -- common/autotest_common.sh@10 -- # set +x 00:21:39.017 ************************************ 00:21:39.017 END TEST nvmf_fio_host 00:21:39.017 ************************************ 00:21:39.017 16:32:47 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:21:39.017 16:32:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:39.017 16:32:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:39.017 16:32:47 -- common/autotest_common.sh@10 -- # set +x 00:21:39.276 ************************************ 00:21:39.276 START TEST nvmf_failover 00:21:39.276 ************************************ 00:21:39.276 16:32:48 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:21:39.276 * Looking for test storage... 
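The nvmf_fio_host run that just ended above drives its I/O through fio's external SPDK ioengine rather than the kernel NVMe initiator. Stripped of the xtrace output, the invocation reduces to roughly the following sketch; the plugin path, fio config and the 192.168.100.8:4420 RDMA listener are the values from this particular run, not fixed defaults:

  # preload the SPDK fio plugin built under build/fio and point fio at the NVMe/RDMA target
  LD_PRELOAD=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme \
    /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096

The second pass above repeats the same pattern with mock_sgl_config.fio to run 16 KiB SGL-backed I/O before the subsystem is deleted and the target is torn down.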
00:21:39.276 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:39.276 16:32:48 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:39.276 16:32:48 -- nvmf/common.sh@7 -- # uname -s 00:21:39.276 16:32:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:39.276 16:32:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:39.276 16:32:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:39.276 16:32:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:39.276 16:32:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:39.276 16:32:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:39.276 16:32:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:39.276 16:32:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:39.276 16:32:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:39.276 16:32:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:39.276 16:32:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:21:39.276 16:32:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:21:39.276 16:32:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:39.276 16:32:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:39.276 16:32:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:39.276 16:32:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:39.276 16:32:48 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:39.276 16:32:48 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:39.276 16:32:48 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:39.276 16:32:48 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:39.276 16:32:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.276 16:32:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.276 16:32:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.276 16:32:48 -- paths/export.sh@5 -- # export PATH 00:21:39.276 16:32:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.276 16:32:48 -- nvmf/common.sh@47 -- # : 0 00:21:39.276 16:32:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:39.276 16:32:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:39.276 16:32:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:39.276 16:32:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:39.276 16:32:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:39.276 16:32:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:39.276 16:32:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:39.276 16:32:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:39.276 16:32:48 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:39.276 16:32:48 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:39.276 16:32:48 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:21:39.276 16:32:48 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:39.276 16:32:48 -- host/failover.sh@18 -- # nvmftestinit 00:21:39.276 16:32:48 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:21:39.276 16:32:48 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:39.276 16:32:48 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:39.276 16:32:48 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:39.276 16:32:48 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:39.276 16:32:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:39.276 16:32:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:39.276 16:32:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.276 16:32:48 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:39.276 16:32:48 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:39.276 16:32:48 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:39.276 16:32:48 -- common/autotest_common.sh@10 -- # set +x 00:21:45.840 16:32:54 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:45.840 16:32:54 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:45.840 16:32:54 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:45.840 16:32:54 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:45.840 16:32:54 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:45.840 16:32:54 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:45.840 16:32:54 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:21:45.840 16:32:54 -- nvmf/common.sh@295 -- # net_devs=() 00:21:45.840 16:32:54 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:45.840 16:32:54 -- nvmf/common.sh@296 -- # e810=() 00:21:45.840 16:32:54 -- nvmf/common.sh@296 -- # local -ga e810 00:21:45.840 16:32:54 -- nvmf/common.sh@297 -- # x722=() 00:21:45.840 16:32:54 -- nvmf/common.sh@297 -- # local -ga x722 00:21:45.840 16:32:54 -- nvmf/common.sh@298 -- # mlx=() 00:21:45.840 16:32:54 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:45.840 16:32:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:45.840 16:32:54 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:45.840 16:32:54 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:45.840 16:32:54 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:45.840 16:32:54 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:45.840 16:32:54 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:45.840 16:32:54 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:45.840 16:32:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:45.840 16:32:54 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:45.840 16:32:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:45.840 16:32:54 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:45.840 16:32:54 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:45.840 16:32:54 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:45.840 16:32:54 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:45.840 16:32:54 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:45.840 16:32:54 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:45.840 16:32:54 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:45.840 16:32:54 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:45.840 16:32:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:45.840 16:32:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:21:45.840 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:21:45.840 16:32:54 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:45.840 16:32:54 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:45.840 16:32:54 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:21:45.841 16:32:54 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:21:45.841 16:32:54 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:45.841 16:32:54 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:45.841 16:32:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:45.841 16:32:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:21:45.841 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:21:45.841 16:32:54 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:45.841 16:32:54 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:45.841 16:32:54 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:21:45.841 16:32:54 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:21:45.841 16:32:54 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:45.841 16:32:54 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:45.841 16:32:54 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:45.841 16:32:54 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:45.841 16:32:54 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:45.841 16:32:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:45.841 16:32:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:45.841 16:32:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:45.841 16:32:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:21:45.841 Found net devices under 0000:18:00.0: mlx_0_0 00:21:45.841 16:32:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:45.841 16:32:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:45.841 16:32:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:45.841 16:32:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:45.841 16:32:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:45.841 16:32:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:21:45.841 Found net devices under 0000:18:00.1: mlx_0_1 00:21:45.841 16:32:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:45.841 16:32:54 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:45.841 16:32:54 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:45.841 16:32:54 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:45.841 16:32:54 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:21:45.841 16:32:54 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:21:45.841 16:32:54 -- nvmf/common.sh@409 -- # rdma_device_init 00:21:45.841 16:32:54 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:21:45.841 16:32:54 -- nvmf/common.sh@58 -- # uname 00:21:45.841 16:32:54 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:45.841 16:32:54 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:45.841 16:32:54 -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:45.841 16:32:54 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:45.841 16:32:54 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:45.841 16:32:54 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:45.841 16:32:54 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:45.841 16:32:54 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:45.841 16:32:54 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:21:45.841 16:32:54 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:45.841 16:32:54 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:45.841 16:32:54 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:45.841 16:32:54 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:45.841 16:32:54 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:45.841 16:32:54 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:45.841 16:32:54 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:45.841 16:32:54 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:45.841 16:32:54 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:45.841 16:32:54 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:45.841 16:32:54 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:45.841 16:32:54 -- nvmf/common.sh@105 -- # continue 2 00:21:45.841 16:32:54 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:45.841 16:32:54 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:45.841 16:32:54 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:45.841 16:32:54 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:45.841 16:32:54 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\1 ]] 00:21:45.841 16:32:54 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:45.841 16:32:54 -- nvmf/common.sh@105 -- # continue 2 00:21:45.841 16:32:54 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:45.841 16:32:54 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:45.841 16:32:54 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:45.841 16:32:54 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:45.841 16:32:54 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:45.841 16:32:54 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:45.841 16:32:54 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:45.841 16:32:54 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:45.841 16:32:54 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:45.841 15: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:45.841 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:21:45.841 altname enp24s0f0np0 00:21:45.841 altname ens785f0np0 00:21:45.841 inet 192.168.100.8/24 scope global mlx_0_0 00:21:45.841 valid_lft forever preferred_lft forever 00:21:45.841 16:32:54 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:45.841 16:32:54 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:45.841 16:32:54 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:45.841 16:32:54 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:45.841 16:32:54 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:45.841 16:32:54 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:45.841 16:32:54 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:45.841 16:32:54 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:45.841 16:32:54 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:45.841 16: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:45.841 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:21:45.841 altname enp24s0f1np1 00:21:45.841 altname ens785f1np1 00:21:45.841 inet 192.168.100.9/24 scope global mlx_0_1 00:21:45.841 valid_lft forever preferred_lft forever 00:21:45.841 16:32:54 -- nvmf/common.sh@411 -- # return 0 00:21:45.841 16:32:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:45.841 16:32:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:45.841 16:32:54 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:21:45.841 16:32:54 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:21:45.841 16:32:54 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:45.841 16:32:54 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:45.841 16:32:54 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:45.841 16:32:54 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:45.841 16:32:54 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:45.841 16:32:54 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:45.841 16:32:54 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:45.841 16:32:54 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:45.841 16:32:54 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:45.841 16:32:54 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:45.841 16:32:54 -- nvmf/common.sh@105 -- # continue 2 00:21:45.841 16:32:54 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:45.841 16:32:54 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:45.841 16:32:54 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:45.841 16:32:54 -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:21:45.841 16:32:54 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:45.841 16:32:54 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:45.841 16:32:54 -- nvmf/common.sh@105 -- # continue 2 00:21:45.841 16:32:54 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:45.841 16:32:54 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:45.841 16:32:54 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:45.841 16:32:54 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:45.841 16:32:54 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:45.841 16:32:54 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:45.841 16:32:54 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:45.841 16:32:54 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:45.841 16:32:54 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:45.841 16:32:54 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:45.841 16:32:54 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:45.841 16:32:54 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:45.841 16:32:54 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:21:45.841 192.168.100.9' 00:21:45.841 16:32:54 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:21:45.841 192.168.100.9' 00:21:45.841 16:32:54 -- nvmf/common.sh@446 -- # head -n 1 00:21:45.841 16:32:54 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:45.841 16:32:54 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:21:45.841 192.168.100.9' 00:21:45.841 16:32:54 -- nvmf/common.sh@447 -- # tail -n +2 00:21:45.841 16:32:54 -- nvmf/common.sh@447 -- # head -n 1 00:21:45.841 16:32:54 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:45.841 16:32:54 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:21:45.841 16:32:54 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:45.841 16:32:54 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:21:45.841 16:32:54 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:21:45.841 16:32:54 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:21:45.841 16:32:54 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:45.841 16:32:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:45.841 16:32:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:45.841 16:32:54 -- common/autotest_common.sh@10 -- # set +x 00:21:45.841 16:32:54 -- nvmf/common.sh@470 -- # nvmfpid=542196 00:21:45.841 16:32:54 -- nvmf/common.sh@471 -- # waitforlisten 542196 00:21:45.841 16:32:54 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:45.841 16:32:54 -- common/autotest_common.sh@817 -- # '[' -z 542196 ']' 00:21:45.841 16:32:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:45.841 16:32:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:45.841 16:32:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:45.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:45.841 16:32:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:45.841 16:32:54 -- common/autotest_common.sh@10 -- # set +x 00:21:45.841 [2024-04-26 16:32:54.497077] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
00:21:45.842 [2024-04-26 16:32:54.497139] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:45.842 EAL: No free 2048 kB hugepages reported on node 1 00:21:45.842 [2024-04-26 16:32:54.569261] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:45.842 [2024-04-26 16:32:54.650862] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:45.842 [2024-04-26 16:32:54.650904] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:45.842 [2024-04-26 16:32:54.650914] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:45.842 [2024-04-26 16:32:54.650922] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:45.842 [2024-04-26 16:32:54.650929] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:45.842 [2024-04-26 16:32:54.651029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:45.842 [2024-04-26 16:32:54.651120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:45.842 [2024-04-26 16:32:54.651122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:46.405 16:32:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:46.405 16:32:55 -- common/autotest_common.sh@850 -- # return 0 00:21:46.405 16:32:55 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:46.405 16:32:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:46.405 16:32:55 -- common/autotest_common.sh@10 -- # set +x 00:21:46.405 16:32:55 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:46.405 16:32:55 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:46.664 [2024-04-26 16:32:55.524130] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x249fb30/0x24a4020) succeed. 00:21:46.664 [2024-04-26 16:32:55.534377] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x24a10d0/0x24e56b0) succeed. 
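Behind the xtrace output, bringing the target up for the failover test amounts to starting nvmf_tgt and creating the RDMA transport over the two mlx5 ports discovered earlier. A condensed sketch using the values logged above (binary path, core mask and buffer counts are the ones this run used; the backgrounding and wait handling are simplified here, the harness does this via waitforlisten):

  # start the NVMe-oF target application (command from nvmf/common.sh@469 above)
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  # once the target listens on /var/tmp/spdk.sock, create the RDMA transport (host/failover.sh@22)
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

The two create_ib_device notices above confirm that both mlx5 ports (mlx5_0 and mlx5_1) were registered with the transport.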
00:21:46.664 16:32:55 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:46.922 Malloc0 00:21:46.922 16:32:55 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:47.180 16:32:56 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:47.438 16:32:56 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:47.438 [2024-04-26 16:32:56.393639] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:47.438 16:32:56 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:21:47.696 [2024-04-26 16:32:56.578062] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:21:47.696 16:32:56 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:21:47.980 [2024-04-26 16:32:56.762779] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:21:47.980 16:32:56 -- host/failover.sh@31 -- # bdevperf_pid=542429 00:21:47.980 16:32:56 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:47.980 16:32:56 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:47.980 16:32:56 -- host/failover.sh@34 -- # waitforlisten 542429 /var/tmp/bdevperf.sock 00:21:47.980 16:32:56 -- common/autotest_common.sh@817 -- # '[' -z 542429 ']' 00:21:47.980 16:32:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:47.980 16:32:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:47.980 16:32:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:47.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:47.980 16:32:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:47.980 16:32:56 -- common/autotest_common.sh@10 -- # set +x 00:21:48.913 16:32:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:48.913 16:32:57 -- common/autotest_common.sh@850 -- # return 0 00:21:48.913 16:32:57 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:48.913 NVMe0n1 00:21:48.913 16:32:57 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:49.172 00:21:49.172 16:32:58 -- host/failover.sh@39 -- # run_test_pid=542621 00:21:49.172 16:32:58 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:49.172 16:32:58 -- host/failover.sh@41 -- # sleep 1 00:21:50.547 16:32:59 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:50.547 16:32:59 -- host/failover.sh@45 -- # sleep 3 00:21:53.828 16:33:02 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:53.828 00:21:53.828 16:33:02 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:21:53.828 16:33:02 -- host/failover.sh@50 -- # sleep 3 00:21:57.109 16:33:05 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:57.109 [2024-04-26 16:33:06.005323] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:57.109 16:33:06 -- host/failover.sh@55 -- # sleep 1 00:21:58.044 16:33:07 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:21:58.301 16:33:07 -- host/failover.sh@59 -- # wait 542621 00:22:04.968 0 00:22:04.968 16:33:13 -- host/failover.sh@61 -- # killprocess 542429 00:22:04.968 16:33:13 -- common/autotest_common.sh@936 -- # '[' -z 542429 ']' 00:22:04.968 16:33:13 -- common/autotest_common.sh@940 -- # kill -0 542429 00:22:04.968 16:33:13 -- common/autotest_common.sh@941 -- # uname 00:22:04.968 16:33:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:04.968 16:33:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 542429 00:22:04.968 16:33:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:04.968 16:33:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:04.968 16:33:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 542429' 00:22:04.968 killing process with pid 542429 00:22:04.968 16:33:13 -- common/autotest_common.sh@955 -- # kill 542429 00:22:04.968 16:33:13 -- common/autotest_common.sh@960 -- # wait 542429 00:22:04.968 16:33:13 -- host/failover.sh@63 -- # cat 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:04.968 [2024-04-26 16:32:56.835885] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:22:04.968 [2024-04-26 16:32:56.835944] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid542429 ] 00:22:04.968 EAL: No free 2048 kB hugepages reported on node 1 00:22:04.968 [2024-04-26 16:32:56.909044] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:04.968 [2024-04-26 16:32:56.986788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:04.968 Running I/O for 15 seconds... 00:22:04.968 [2024-04-26 16:33:00.344265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x185e00 00:22:04.968 [2024-04-26 16:33:00.344309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.968 [2024-04-26 16:33:00.344333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x185e00 00:22:04.968 [2024-04-26 16:33:00.344343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.968 [2024-04-26 16:33:00.344359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x185e00 00:22:04.968 [2024-04-26 16:33:00.344369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.968 [2024-04-26 16:33:00.344382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:24368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x185e00 00:22:04.968 [2024-04-26 16:33:00.344392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.968 [2024-04-26 16:33:00.344404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:24376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x185e00 00:22:04.968 [2024-04-26 16:33:00.344413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.968 [2024-04-26 16:33:00.344424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x185e00 00:22:04.968 [2024-04-26 16:33:00.344433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.968 [2024-04-26 16:33:00.344445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x185e00 00:22:04.968 [2024-04-26 16:33:00.344454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.968 [2024-04-26 16:33:00.344465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:28 nsid:1 lba:24400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x185e00 00:22:04.968 [2024-04-26 16:33:00.344475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.969 [2024-04-26 16:33:00.344486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x185e00 00:22:04.969 [2024-04-26 16:33:00.344495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.969 [2024-04-26 16:33:00.344506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x185e00 00:22:04.969 [2024-04-26 16:33:00.344515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.969 [2024-04-26 16:33:00.344532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x185e00 00:22:04.969 [2024-04-26 16:33:00.344541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.969 [2024-04-26 16:33:00.344553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x185e00 00:22:04.969 [2024-04-26 16:33:00.344562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.969 [2024-04-26 16:33:00.344573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x185e00 00:22:04.969 [2024-04-26 16:33:00.344582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.969 [2024-04-26 16:33:00.344593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:24448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x185e00 00:22:04.969 [2024-04-26 16:33:00.344603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.969 [2024-04-26 16:33:00.344614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:24456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x185e00 00:22:04.969 [2024-04-26 16:33:00.344623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.969 [2024-04-26 16:33:00.344634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x185e00 00:22:04.969 [2024-04-26 16:33:00.344644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.969 [2024-04-26 16:33:00.344655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:24472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 
key:0x185e00 00:22:04.969 [2024-04-26 16:33:00.344664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.969 [2024-04-26 16:33:00.344675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x185e00 00:22:04.969 [2024-04-26 16:33:00.344685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.969 [2024-04-26 16:33:00.344696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x185e00 00:22:04.969 [2024-04-26 16:33:00.344705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.969 [2024-04-26 16:33:00.344716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x185e00 00:22:04.969 [2024-04-26 16:33:00.344725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.969 [2024-04-26 16:33:00.344736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:24504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x185e00 00:22:04.969 [2024-04-26 16:33:00.344745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.969 [2024-04-26 16:33:00.344757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x185e00 00:22:04.969 [2024-04-26 16:33:00.344768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.969 [2024-04-26 16:33:00.344779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x185e00 00:22:04.969 [2024-04-26 16:33:00.344788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.969 [2024-04-26 16:33:00.344799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:24528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x185e00 00:22:04.969 [2024-04-26 16:33:00.344808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.969 [2024-04-26 16:33:00.344819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x185e00 00:22:04.969 [2024-04-26 16:33:00.344829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.969 [2024-04-26 16:33:00.344840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:24544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x185e00 00:22:04.969 [2024-04-26 16:33:00.344849] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.969 [2024-04-26 16:33:00.344860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x185e00 00:22:04.969 [2024-04-26 16:33:00.344870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.969 [2024-04-26 16:33:00.344880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x185e00 00:22:04.969 [2024-04-26 16:33:00.344889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.969 [2024-04-26 16:33:00.344900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x185e00 00:22:04.969 [2024-04-26 16:33:00.344910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.969 [2024-04-26 16:33:00.344921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.969 [2024-04-26 16:33:00.344930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.969 [2024-04-26 16:33:00.344942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.969 [2024-04-26 16:33:00.344951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.969 [2024-04-26 16:33:00.344963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:24592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.969 [2024-04-26 16:33:00.344972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.969 [2024-04-26 16:33:00.344983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.969 [2024-04-26 16:33:00.344992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.969 [2024-04-26 16:33:00.345003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.969 [2024-04-26 16:33:00.345014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.969 [2024-04-26 16:33:00.345025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.969 [2024-04-26 16:33:00.345034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.969 [2024-04-26 16:33:00.345045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:24624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.969 
[2024-04-26 16:33:00.345054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.969 [2024-04-26 16:33:00.345065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:24632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.969 [2024-04-26 16:33:00.345074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.969 [2024-04-26 16:33:00.345085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:24640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.969 [2024-04-26 16:33:00.345094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.969 [2024-04-26 16:33:00.345106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:24648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.969 [2024-04-26 16:33:00.345115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.969 [2024-04-26 16:33:00.345126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:24656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.969 [2024-04-26 16:33:00.345135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.969 [2024-04-26 16:33:00.345146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:24664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.969 [2024-04-26 16:33:00.345155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.969 [2024-04-26 16:33:00.345165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:24672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.969 [2024-04-26 16:33:00.345175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.969 [2024-04-26 16:33:00.345186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.969 [2024-04-26 16:33:00.345195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.969 [2024-04-26 16:33:00.345206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:24688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.969 [2024-04-26 16:33:00.345215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.969 [2024-04-26 16:33:00.345225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:24696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.969 [2024-04-26 16:33:00.345235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.969 [2024-04-26 16:33:00.345246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:24704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.970 [2024-04-26 
16:33:00.345255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.970 [2024-04-26 16:33:00.345268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:24712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.970 [2024-04-26 16:33:00.345277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.970 [2024-04-26 16:33:00.345288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:24720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.970 [2024-04-26 16:33:00.345298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.970 [2024-04-26 16:33:00.345309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:24728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.970 [2024-04-26 16:33:00.345318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.970 [2024-04-26 16:33:00.345329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:24736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.970 [2024-04-26 16:33:00.345338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.970 [2024-04-26 16:33:00.345357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:24744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.970 [2024-04-26 16:33:00.345369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.970 [2024-04-26 16:33:00.345380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:24752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.970 [2024-04-26 16:33:00.345389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.970 [2024-04-26 16:33:00.345400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.970 [2024-04-26 16:33:00.345409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.970 [2024-04-26 16:33:00.345420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:24768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.970 [2024-04-26 16:33:00.345429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.970 [2024-04-26 16:33:00.345440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:24776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.970 [2024-04-26 16:33:00.345450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.970 [2024-04-26 16:33:00.345460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:24784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.970 [2024-04-26 16:33:00.345470] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.970 [2024-04-26 16:33:00.345481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:24792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.970 [2024-04-26 16:33:00.345490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.970 [2024-04-26 16:33:00.345501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:24800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.970 [2024-04-26 16:33:00.345510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.970 [2024-04-26 16:33:00.345520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:24808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.970 [2024-04-26 16:33:00.345534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.970 [2024-04-26 16:33:00.345545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:24816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.970 [2024-04-26 16:33:00.345554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.970 [2024-04-26 16:33:00.345565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.970 [2024-04-26 16:33:00.345575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.970 [2024-04-26 16:33:00.345585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.970 [2024-04-26 16:33:00.345594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.970 [2024-04-26 16:33:00.345605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.970 [2024-04-26 16:33:00.345615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.970 [2024-04-26 16:33:00.345626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.970 [2024-04-26 16:33:00.345636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.970 [2024-04-26 16:33:00.345647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:24856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.970 [2024-04-26 16:33:00.345656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.970 [2024-04-26 16:33:00.345667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.970 [2024-04-26 16:33:00.345676] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.970 [2024-04-26 16:33:00.345687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.970 [2024-04-26 16:33:00.345696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.970 [2024-04-26 16:33:00.345707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.970 [2024-04-26 16:33:00.345716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.970 [2024-04-26 16:33:00.345727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.970 [2024-04-26 16:33:00.345736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.970 [2024-04-26 16:33:00.345747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:24896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.970 [2024-04-26 16:33:00.345756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.970 [2024-04-26 16:33:00.345767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.970 [2024-04-26 16:33:00.345776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.970 [2024-04-26 16:33:00.345788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:24912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.970 [2024-04-26 16:33:00.345797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.970 [2024-04-26 16:33:00.345808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.970 [2024-04-26 16:33:00.345817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.970 [2024-04-26 16:33:00.345828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.970 [2024-04-26 16:33:00.345837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.970 [2024-04-26 16:33:00.345849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.970 [2024-04-26 16:33:00.345858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.970 [2024-04-26 16:33:00.345869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.970 [2024-04-26 16:33:00.345878] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.970 [2024-04-26 16:33:00.345888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:24952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.970 [2024-04-26 16:33:00.345898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.970 [2024-04-26 16:33:00.345909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:24960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.970 [2024-04-26 16:33:00.345918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.970 [2024-04-26 16:33:00.345928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.970 [2024-04-26 16:33:00.345937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.970 [2024-04-26 16:33:00.345949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.970 [2024-04-26 16:33:00.345959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.970 [2024-04-26 16:33:00.345969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:24984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.970 [2024-04-26 16:33:00.345978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.970 [2024-04-26 16:33:00.345989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.970 [2024-04-26 16:33:00.345998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.970 [2024-04-26 16:33:00.346010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:25000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.970 [2024-04-26 16:33:00.346019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.970 [2024-04-26 16:33:00.346030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:25008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.970 [2024-04-26 16:33:00.346040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.970 [2024-04-26 16:33:00.346051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:25016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.970 [2024-04-26 16:33:00.346060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.970 [2024-04-26 16:33:00.346071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:25024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.971 [2024-04-26 16:33:00.346080] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.971 [2024-04-26 16:33:00.346091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:25032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.971 [2024-04-26 16:33:00.346100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.971 [2024-04-26 16:33:00.346111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:25040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.971 [2024-04-26 16:33:00.346121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.971 [2024-04-26 16:33:00.346131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:25048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.971 [2024-04-26 16:33:00.346140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.971 [2024-04-26 16:33:00.346151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:25056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.971 [2024-04-26 16:33:00.346160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.971 [2024-04-26 16:33:00.346171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:25064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.971 [2024-04-26 16:33:00.346181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.971 [2024-04-26 16:33:00.346191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:25072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.971 [2024-04-26 16:33:00.346200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.971 [2024-04-26 16:33:00.346211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:25080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.971 [2024-04-26 16:33:00.346220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.971 [2024-04-26 16:33:00.346231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:25088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.971 [2024-04-26 16:33:00.346241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.971 [2024-04-26 16:33:00.346251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:25096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.971 [2024-04-26 16:33:00.346260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.971 [2024-04-26 16:33:00.346271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.971 [2024-04-26 16:33:00.346281] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.971 [2024-04-26 16:33:00.346294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.971 [2024-04-26 16:33:00.346303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.971 [2024-04-26 16:33:00.346313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:25120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.971 [2024-04-26 16:33:00.346322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.971 [2024-04-26 16:33:00.346333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.971 [2024-04-26 16:33:00.346343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.971 [2024-04-26 16:33:00.346363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.971 [2024-04-26 16:33:00.346373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.971 [2024-04-26 16:33:00.346383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:25144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.971 [2024-04-26 16:33:00.346393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.971 [2024-04-26 16:33:00.346404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.971 [2024-04-26 16:33:00.346413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.971 [2024-04-26 16:33:00.346424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:25160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.971 [2024-04-26 16:33:00.346433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.971 [2024-04-26 16:33:00.346444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:25168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.971 [2024-04-26 16:33:00.346453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.971 [2024-04-26 16:33:00.346464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:25176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.971 [2024-04-26 16:33:00.346473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.971 [2024-04-26 16:33:00.346484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:25184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.971 [2024-04-26 16:33:00.346493] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.971 [2024-04-26 16:33:00.346504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:25192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.971 [2024-04-26 16:33:00.346513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.971 [2024-04-26 16:33:00.346524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:25200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.971 [2024-04-26 16:33:00.346533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.971 [2024-04-26 16:33:00.346544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:25208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.971 [2024-04-26 16:33:00.346555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.971 [2024-04-26 16:33:00.346565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:25216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.971 [2024-04-26 16:33:00.346575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.971 [2024-04-26 16:33:00.346586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:25224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.971 [2024-04-26 16:33:00.346595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.971 [2024-04-26 16:33:00.346606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:25232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.971 [2024-04-26 16:33:00.346615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.971 [2024-04-26 16:33:00.346625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:25240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.971 [2024-04-26 16:33:00.346635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.971 [2024-04-26 16:33:00.346646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:25248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.971 [2024-04-26 16:33:00.346655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.971 [2024-04-26 16:33:00.346666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:25256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.971 [2024-04-26 16:33:00.346675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.971 [2024-04-26 16:33:00.346686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:25264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.971 [2024-04-26 16:33:00.346695] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.971 [2024-04-26 16:33:00.346706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:25272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.971 [2024-04-26 16:33:00.346715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.971 [2024-04-26 16:33:00.346726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:25280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.971 [2024-04-26 16:33:00.346735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.971 [2024-04-26 16:33:00.346746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:25288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.971 [2024-04-26 16:33:00.346755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.971 [2024-04-26 16:33:00.346766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:25296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.971 [2024-04-26 16:33:00.346775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.971 [2024-04-26 16:33:00.346785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:25304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.971 [2024-04-26 16:33:00.346794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.971 [2024-04-26 16:33:00.346807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:25312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.971 [2024-04-26 16:33:00.346816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.971 [2024-04-26 16:33:00.346827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.971 [2024-04-26 16:33:00.346836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.971 [2024-04-26 16:33:00.346846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.971 [2024-04-26 16:33:00.346856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.971 [2024-04-26 16:33:00.346867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:25336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.971 [2024-04-26 16:33:00.346876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.972 [2024-04-26 16:33:00.346887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.972 [2024-04-26 16:33:00.346896] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0
00:22:04.972 [2024-04-26 16:33:00.346907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:25352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:04.972 [2024-04-26 16:33:00.346916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0
00:22:04.972 [2024-04-26 16:33:00.348143] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:04.972 [2024-04-26 16:33:00.348157] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:04.972 [2024-04-26 16:33:00.348166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25360 len:8 PRP1 0x0 PRP2 0x0
00:22:04.972 [2024-04-26 16:33:00.348176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:04.972 [2024-04-26 16:33:00.348219] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4a00 was disconnected and freed. reset controller.
00:22:04.972 [2024-04-26 16:33:00.348231] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421
00:22:04.972 [2024-04-26 16:33:00.348241] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:04.972 [2024-04-26 16:33:00.351039] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:04.972 [2024-04-26 16:33:00.365342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:22:04.972 [2024-04-26 16:33:00.414521] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:22:04.972 [2024-04-26 16:33:03.816145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:120968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x185e00 00:22:04.972 [2024-04-26 16:33:03.816184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.972 [2024-04-26 16:33:03.816205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:121424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.972 [2024-04-26 16:33:03.816215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.972 [2024-04-26 16:33:03.816235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:121432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.972 [2024-04-26 16:33:03.816245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.972 [2024-04-26 16:33:03.816257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:121440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.972 [2024-04-26 16:33:03.816266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.972 [2024-04-26 16:33:03.816276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:121448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.972 [2024-04-26 16:33:03.816286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.972 [2024-04-26 16:33:03.816297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:121456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.972 [2024-04-26 16:33:03.816307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.972 [2024-04-26 16:33:03.816317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:121464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.972 [2024-04-26 16:33:03.816327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.972 [2024-04-26 16:33:03.816337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:121472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.972 [2024-04-26 16:33:03.816349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.972 [2024-04-26 16:33:03.816360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:121480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.972 [2024-04-26 16:33:03.816370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.972 [2024-04-26 16:33:03.816381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:120976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x185e00 00:22:04.972 [2024-04-26 16:33:03.816391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.972 [2024-04-26 16:33:03.816402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:120984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x185e00 00:22:04.972 [2024-04-26 16:33:03.816411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.972 [2024-04-26 16:33:03.816422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:120992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x185e00 00:22:04.972 [2024-04-26 16:33:03.816431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.972 [2024-04-26 16:33:03.816442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:121000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x185e00 00:22:04.972 [2024-04-26 16:33:03.816452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.972 [2024-04-26 16:33:03.816463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:121008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x185e00 00:22:04.972 [2024-04-26 16:33:03.816472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.972 [2024-04-26 16:33:03.816485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:121016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x185e00 00:22:04.972 [2024-04-26 16:33:03.816494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.972 [2024-04-26 16:33:03.816505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:121024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x185e00 00:22:04.972 [2024-04-26 16:33:03.816514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.972 [2024-04-26 16:33:03.816526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:121032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x185e00 00:22:04.972 [2024-04-26 16:33:03.816535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.972 [2024-04-26 16:33:03.816546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:121488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.972 [2024-04-26 16:33:03.816556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.972 [2024-04-26 16:33:03.816566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:121496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.972 [2024-04-26 16:33:03.816576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.972 [2024-04-26 16:33:03.816586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:105 nsid:1 lba:121504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.972 [2024-04-26 16:33:03.816596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.972 [2024-04-26 16:33:03.816607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:121512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.972 [2024-04-26 16:33:03.816616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.972 [2024-04-26 16:33:03.816628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:121520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.972 [2024-04-26 16:33:03.816637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.972 [2024-04-26 16:33:03.816648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:121528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.972 [2024-04-26 16:33:03.816657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.972 [2024-04-26 16:33:03.816668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:121536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.972 [2024-04-26 16:33:03.816677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.972 [2024-04-26 16:33:03.816688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:121544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.972 [2024-04-26 16:33:03.816697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.972 [2024-04-26 16:33:03.816708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:121552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.972 [2024-04-26 16:33:03.816717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.972 [2024-04-26 16:33:03.816728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:121560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.972 [2024-04-26 16:33:03.816739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.972 [2024-04-26 16:33:03.816749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:121568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.972 [2024-04-26 16:33:03.816758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.972 [2024-04-26 16:33:03.816769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:121576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.972 [2024-04-26 16:33:03.816778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.972 [2024-04-26 16:33:03.816789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:38 nsid:1 lba:121584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.972 [2024-04-26 16:33:03.816799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.972 [2024-04-26 16:33:03.816810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:121592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.972 [2024-04-26 16:33:03.816819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.972 [2024-04-26 16:33:03.816831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:121600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.972 [2024-04-26 16:33:03.816840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.972 [2024-04-26 16:33:03.816851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:121608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.973 [2024-04-26 16:33:03.816861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.973 [2024-04-26 16:33:03.816872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:121040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x185e00 00:22:04.973 [2024-04-26 16:33:03.816881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.973 [2024-04-26 16:33:03.816892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:121048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x185e00 00:22:04.973 [2024-04-26 16:33:03.816901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.973 [2024-04-26 16:33:03.816912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:121056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x185e00 00:22:04.973 [2024-04-26 16:33:03.816922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.973 [2024-04-26 16:33:03.816932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:121064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x185e00 00:22:04.973 [2024-04-26 16:33:03.816942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.973 [2024-04-26 16:33:03.816952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:121072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x185e00 00:22:04.973 [2024-04-26 16:33:03.816962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.973 [2024-04-26 16:33:03.816972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:121080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x185e00 00:22:04.973 [2024-04-26 16:33:03.816983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.973 [2024-04-26 16:33:03.816993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:121616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.973 [2024-04-26 16:33:03.817003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.973 [2024-04-26 16:33:03.817014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:121624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.973 [2024-04-26 16:33:03.817023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.973 [2024-04-26 16:33:03.817034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:121632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.973 [2024-04-26 16:33:03.817043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.973 [2024-04-26 16:33:03.817054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:121640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.973 [2024-04-26 16:33:03.817063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.973 [2024-04-26 16:33:03.817073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:121648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.973 [2024-04-26 16:33:03.817083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.973 [2024-04-26 16:33:03.817093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:121656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.973 [2024-04-26 16:33:03.817103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.973 [2024-04-26 16:33:03.817113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:121664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.973 [2024-04-26 16:33:03.817122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.973 [2024-04-26 16:33:03.817133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:121672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.973 [2024-04-26 16:33:03.817142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.973 [2024-04-26 16:33:03.817153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:121680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.973 [2024-04-26 16:33:03.817163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.973 [2024-04-26 16:33:03.817174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:121688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.973 [2024-04-26 16:33:03.817183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.973 [2024-04-26 16:33:03.817193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:121696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.973 [2024-04-26 16:33:03.817203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.973 [2024-04-26 16:33:03.817214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:121704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.973 [2024-04-26 16:33:03.817223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.973 [2024-04-26 16:33:03.817235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:121712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.973 [2024-04-26 16:33:03.817245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.973 [2024-04-26 16:33:03.817255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:121720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.973 [2024-04-26 16:33:03.817264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.973 [2024-04-26 16:33:03.817275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:121728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.973 [2024-04-26 16:33:03.817284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.973 [2024-04-26 16:33:03.817295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:121736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.973 [2024-04-26 16:33:03.817304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.973 [2024-04-26 16:33:03.817315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:121088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x185e00 00:22:04.973 [2024-04-26 16:33:03.817324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.973 [2024-04-26 16:33:03.817335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:121096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x185e00 00:22:04.973 [2024-04-26 16:33:03.817349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.973 [2024-04-26 16:33:03.817367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:121744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.973 [2024-04-26 16:33:03.817381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.973 [2024-04-26 16:33:03.817392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:121752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.973 [2024-04-26 16:33:03.817401] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.973 [2024-04-26 16:33:03.817412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:121760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.973 [2024-04-26 16:33:03.817422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.973 [2024-04-26 16:33:03.817433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:121768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.973 [2024-04-26 16:33:03.817442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.973 [2024-04-26 16:33:03.817453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:121776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.973 [2024-04-26 16:33:03.817462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.973 [2024-04-26 16:33:03.817473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:121784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.973 [2024-04-26 16:33:03.817483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.973 [2024-04-26 16:33:03.817495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:121792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.974 [2024-04-26 16:33:03.817504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.974 [2024-04-26 16:33:03.817515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:121800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.974 [2024-04-26 16:33:03.817524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.974 [2024-04-26 16:33:03.817535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:121104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x185e00 00:22:04.974 [2024-04-26 16:33:03.817545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.974 [2024-04-26 16:33:03.817555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:121112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x185e00 00:22:04.974 [2024-04-26 16:33:03.817565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.974 [2024-04-26 16:33:03.817575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:121120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x185e00 00:22:04.974 [2024-04-26 16:33:03.817585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.974 [2024-04-26 16:33:03.817596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:121128 len:8 SGL KEYED DATA 
BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x185e00 00:22:04.974 [2024-04-26 16:33:03.817605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.974 [2024-04-26 16:33:03.817616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:121136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x185e00 00:22:04.974 [2024-04-26 16:33:03.817625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.974 [2024-04-26 16:33:03.817636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:121144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x185e00 00:22:04.974 [2024-04-26 16:33:03.817645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.974 [2024-04-26 16:33:03.817656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:121152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x185e00 00:22:04.974 [2024-04-26 16:33:03.817665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.974 [2024-04-26 16:33:03.817676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:121160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x185e00 00:22:04.974 [2024-04-26 16:33:03.817685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.974 [2024-04-26 16:33:03.817696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:121808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.974 [2024-04-26 16:33:03.817705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.974 [2024-04-26 16:33:03.817716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:121816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.974 [2024-04-26 16:33:03.817725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.974 [2024-04-26 16:33:03.817737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:121168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x185e00 00:22:04.974 [2024-04-26 16:33:03.817747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.974 [2024-04-26 16:33:03.817758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:121176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x185e00 00:22:04.974 [2024-04-26 16:33:03.817767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.974 [2024-04-26 16:33:03.817778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:121184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x185e00 00:22:04.974 [2024-04-26 16:33:03.817787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.974 [2024-04-26 16:33:03.817798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:121192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x185e00 00:22:04.974 [2024-04-26 16:33:03.817808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.974 [2024-04-26 16:33:03.817819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:121200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x185e00 00:22:04.974 [2024-04-26 16:33:03.817828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.974 [2024-04-26 16:33:03.817839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:121208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x185e00 00:22:04.974 [2024-04-26 16:33:03.817848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.974 [2024-04-26 16:33:03.817859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:121216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x185e00 00:22:04.974 [2024-04-26 16:33:03.817868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.974 [2024-04-26 16:33:03.817879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:121224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x185e00 00:22:04.974 [2024-04-26 16:33:03.817888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.974 [2024-04-26 16:33:03.817899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:121824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.974 [2024-04-26 16:33:03.817908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.974 [2024-04-26 16:33:03.817919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:121832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.974 [2024-04-26 16:33:03.817928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.974 [2024-04-26 16:33:03.817939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:121840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.974 [2024-04-26 16:33:03.817948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.974 [2024-04-26 16:33:03.817958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:121848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.974 [2024-04-26 16:33:03.817969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.974 [2024-04-26 16:33:03.817980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:121856 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.974 [2024-04-26 16:33:03.817990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.974 [2024-04-26 16:33:03.818000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:121864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.974 [2024-04-26 16:33:03.818009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.974 [2024-04-26 16:33:03.818020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:121232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x185e00 00:22:04.974 [2024-04-26 16:33:03.818031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.974 [2024-04-26 16:33:03.818042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:121240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x185e00 00:22:04.974 [2024-04-26 16:33:03.818051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.974 [2024-04-26 16:33:03.818062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:121248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x185e00 00:22:04.974 [2024-04-26 16:33:03.818071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.974 [2024-04-26 16:33:03.818082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:121256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x185e00 00:22:04.974 [2024-04-26 16:33:03.818091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.974 [2024-04-26 16:33:03.818102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:121264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x185e00 00:22:04.974 [2024-04-26 16:33:03.818111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.974 [2024-04-26 16:33:03.818122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:121272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x185e00 00:22:04.974 [2024-04-26 16:33:03.818131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.974 [2024-04-26 16:33:03.818142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:121280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x185e00 00:22:04.974 [2024-04-26 16:33:03.818151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.974 [2024-04-26 16:33:03.818162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:121288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x185e00 00:22:04.974 [2024-04-26 16:33:03.818171] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.974 [2024-04-26 16:33:03.818182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:121296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x185e00 00:22:04.974 [2024-04-26 16:33:03.818191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.974 [2024-04-26 16:33:03.818202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:121304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x185e00 00:22:04.974 [2024-04-26 16:33:03.818214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.974 [2024-04-26 16:33:03.818225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:121312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x185e00 00:22:04.974 [2024-04-26 16:33:03.818234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.974 [2024-04-26 16:33:03.818245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:121320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x185e00 00:22:04.975 [2024-04-26 16:33:03.818254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.975 [2024-04-26 16:33:03.818265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x185e00 00:22:04.975 [2024-04-26 16:33:03.818274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.975 [2024-04-26 16:33:03.818285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:121336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x185e00 00:22:04.975 [2024-04-26 16:33:03.818294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.975 [2024-04-26 16:33:03.818305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:121344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x185e00 00:22:04.975 [2024-04-26 16:33:03.818315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.975 [2024-04-26 16:33:03.818325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:121352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x185e00 00:22:04.975 [2024-04-26 16:33:03.818335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.975 [2024-04-26 16:33:03.818353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:121872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.975 [2024-04-26 16:33:03.818363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.975 [2024-04-26 
16:33:03.818373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:121880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.975 [2024-04-26 16:33:03.818383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.975 [2024-04-26 16:33:03.818394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:121888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.975 [2024-04-26 16:33:03.818403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.975 [2024-04-26 16:33:03.818413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:121896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.975 [2024-04-26 16:33:03.818423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.975 [2024-04-26 16:33:03.818434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:121904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.975 [2024-04-26 16:33:03.818443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.975 [2024-04-26 16:33:03.818455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:121912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.975 [2024-04-26 16:33:03.818464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.975 [2024-04-26 16:33:03.818475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:121920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.975 [2024-04-26 16:33:03.818484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.975 [2024-04-26 16:33:03.818495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:121928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.975 [2024-04-26 16:33:03.818504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.975 [2024-04-26 16:33:03.818515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:121360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x185e00 00:22:04.975 [2024-04-26 16:33:03.818524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.975 [2024-04-26 16:33:03.818535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:121368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x185e00 00:22:04.975 [2024-04-26 16:33:03.818544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.975 [2024-04-26 16:33:03.818555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:121376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x185e00 00:22:04.975 [2024-04-26 16:33:03.818565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.975 [2024-04-26 16:33:03.818576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:121384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x185e00 00:22:04.975 [2024-04-26 16:33:03.818585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.975 [2024-04-26 16:33:03.818596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:121392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x185e00 00:22:04.975 [2024-04-26 16:33:03.818606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.975 [2024-04-26 16:33:03.818617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:121400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x185e00 00:22:04.975 [2024-04-26 16:33:03.818626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.975 [2024-04-26 16:33:03.818637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:121408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x185e00 00:22:04.975 [2024-04-26 16:33:03.818646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.975 [2024-04-26 16:33:03.818658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:121416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x185e00 00:22:04.975 [2024-04-26 16:33:03.818667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.975 [2024-04-26 16:33:03.818678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:121936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.975 [2024-04-26 16:33:03.818687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.975 [2024-04-26 16:33:03.818699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:121944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.975 [2024-04-26 16:33:03.818708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.975 [2024-04-26 16:33:03.818719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:121952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.975 [2024-04-26 16:33:03.818728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.975 [2024-04-26 16:33:03.818738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:121960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.975 [2024-04-26 16:33:03.818748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.975 [2024-04-26 16:33:03.818759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:121968 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000
00:22:04.975 [2024-04-26 16:33:03.818769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0
00:22:04.975 [2024-04-26 16:33:03.818780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:121976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:04.975 [2024-04-26 16:33:03.818789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0
00:22:04.975 [2024-04-26 16:33:03.819894] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:04.975 [2024-04-26 16:33:03.819907] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:04.975 [2024-04-26 16:33:03.819916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121984 len:8 PRP1 0x0 PRP2 0x0
00:22:04.975 [2024-04-26 16:33:03.819925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:04.975 [2024-04-26 16:33:03.819966] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4940 was disconnected and freed. reset controller.
00:22:04.975 [2024-04-26 16:33:03.819978] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4421 to 192.168.100.8:4422
00:22:04.975 [2024-04-26 16:33:03.819988] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:04.975 [2024-04-26 16:33:03.822775] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:04.975 [2024-04-26 16:33:03.836760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:22:04.975 [2024-04-26 16:33:03.878225] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
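
The records above show queued I/O being completed with ABORTED - SQ DELETION status while bdev_nvme frees qpair 0x2000192e4940, fails the path over from 192.168.100.8:4421 to 192.168.100.8:4422, and resets nqn.2016-06.io.spdk:cnode1. A minimal sketch (not part of the SPDK test suite; the record patterns are taken from this dump and "console.log" is a placeholder file name) for tallying the aborted commands and pulling out the failover/reset milestones from a saved copy of this console output:

import re
from collections import Counter

# Patterns copied from the log records above; adjust if the trace format changes.
cmd_re = re.compile(r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) sqid:\d+ cid:\d+")
abort_re = re.compile(r"ABORTED - SQ DELETION \(00/08\)")
event_re = re.compile(r"Start failover from \S+ to \S+"
                      r"|\[nqn\.[^\]]+\] resetting controller"
                      r"|Resetting controller successful")

counts = Counter()
events = []
with open("console.log") as log:          # placeholder: path to the archived console output
    for line in log:
        for op in cmd_re.findall(line):   # READ/WRITE commands printed back by nvme_qpair.c
            counts[op] += 1
        counts["aborted"] += len(abort_re.findall(line))  # SQ DELETION completions
        events.extend(event_re.findall(line))             # failover / reset milestones, in order

print(f"commands printed: {counts['READ']} READ, {counts['WRITE']} WRITE; "
      f"aborted completions: {counts['aborted']}")
for ev in events:
    print("event:", ev)

Run against a log like this one, it prints the READ/WRITE totals, the abort count, and the failover and reset events in the order they occurred, which makes the long abort dumps easier to compare between episodes.
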
00:22:04.975 [2024-04-26 16:33:08.205456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:93840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x185e00 00:22:04.975 [2024-04-26 16:33:08.205497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.975 [2024-04-26 16:33:08.205518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:93848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x185e00 00:22:04.975 [2024-04-26 16:33:08.205528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.975 [2024-04-26 16:33:08.205541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:93856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x185e00 00:22:04.975 [2024-04-26 16:33:08.205550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.975 [2024-04-26 16:33:08.205568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:93864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x185e00 00:22:04.975 [2024-04-26 16:33:08.205578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.975 [2024-04-26 16:33:08.205589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:93872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x185e00 00:22:04.975 [2024-04-26 16:33:08.205598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.975 [2024-04-26 16:33:08.205609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:93880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x185e00 00:22:04.975 [2024-04-26 16:33:08.205619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.975 [2024-04-26 16:33:08.205630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:94384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.975 [2024-04-26 16:33:08.205639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.975 [2024-04-26 16:33:08.205651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:94392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.976 [2024-04-26 16:33:08.205660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.976 [2024-04-26 16:33:08.205671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:94400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.976 [2024-04-26 16:33:08.205680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.976 [2024-04-26 16:33:08.205691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:94408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.976 
[2024-04-26 16:33:08.205700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.976 [2024-04-26 16:33:08.205711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:94416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.976 [2024-04-26 16:33:08.205720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.976 [2024-04-26 16:33:08.205731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:94424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.976 [2024-04-26 16:33:08.205740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.976 [2024-04-26 16:33:08.205751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:94432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.976 [2024-04-26 16:33:08.205761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.976 [2024-04-26 16:33:08.205771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:94440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.976 [2024-04-26 16:33:08.205780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.976 [2024-04-26 16:33:08.205791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:93888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x185e00 00:22:04.976 [2024-04-26 16:33:08.205800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.976 [2024-04-26 16:33:08.205813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:93896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x185e00 00:22:04.976 [2024-04-26 16:33:08.205823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.976 [2024-04-26 16:33:08.205834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:93904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x185e00 00:22:04.976 [2024-04-26 16:33:08.205843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.976 [2024-04-26 16:33:08.205856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:93912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x185e00 00:22:04.976 [2024-04-26 16:33:08.205865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.976 [2024-04-26 16:33:08.205876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:93920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x185e00 00:22:04.976 [2024-04-26 16:33:08.205885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.976 [2024-04-26 16:33:08.205896] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:93928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x185e00 00:22:04.976 [2024-04-26 16:33:08.205906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.976 [2024-04-26 16:33:08.205917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:93936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x185e00 00:22:04.976 [2024-04-26 16:33:08.205926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.976 [2024-04-26 16:33:08.205937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:93944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x185e00 00:22:04.976 [2024-04-26 16:33:08.205947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.976 [2024-04-26 16:33:08.205958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:93952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x185e00 00:22:04.976 [2024-04-26 16:33:08.205967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.976 [2024-04-26 16:33:08.205978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:93960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x185e00 00:22:04.976 [2024-04-26 16:33:08.205988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.976 [2024-04-26 16:33:08.205998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:93968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x185e00 00:22:04.976 [2024-04-26 16:33:08.206008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.976 [2024-04-26 16:33:08.206019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:93976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x185e00 00:22:04.976 [2024-04-26 16:33:08.206029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.976 [2024-04-26 16:33:08.206039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:93984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x185e00 00:22:04.976 [2024-04-26 16:33:08.206050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.976 [2024-04-26 16:33:08.206061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:93992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x185e00 00:22:04.976 [2024-04-26 16:33:08.206070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.976 [2024-04-26 16:33:08.206081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:94000 len:8 SGL KEYED DATA 
BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x185e00 00:22:04.976 [2024-04-26 16:33:08.206091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.976 [2024-04-26 16:33:08.206101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:94008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x185e00 00:22:04.976 [2024-04-26 16:33:08.206111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.976 [2024-04-26 16:33:08.206122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:94016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x185e00 00:22:04.976 [2024-04-26 16:33:08.206131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.976 [2024-04-26 16:33:08.206141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:94024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x185e00 00:22:04.976 [2024-04-26 16:33:08.206151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.976 [2024-04-26 16:33:08.206161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:94032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x185e00 00:22:04.976 [2024-04-26 16:33:08.206171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.976 [2024-04-26 16:33:08.206182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:94040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x185e00 00:22:04.976 [2024-04-26 16:33:08.206191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.976 [2024-04-26 16:33:08.206202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:94048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x185e00 00:22:04.976 [2024-04-26 16:33:08.206211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.976 [2024-04-26 16:33:08.206222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:94056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x185e00 00:22:04.976 [2024-04-26 16:33:08.206231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.976 [2024-04-26 16:33:08.206242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:94064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0x185e00 00:22:04.976 [2024-04-26 16:33:08.206251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.976 [2024-04-26 16:33:08.206263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:94072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x185e00 00:22:04.976 [2024-04-26 16:33:08.206272] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.976 [2024-04-26 16:33:08.206284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:94448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.976 [2024-04-26 16:33:08.206294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.976 [2024-04-26 16:33:08.206304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:94456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.976 [2024-04-26 16:33:08.206313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.976 [2024-04-26 16:33:08.206324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:94080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x185e00 00:22:04.976 [2024-04-26 16:33:08.206334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.976 [2024-04-26 16:33:08.206349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:94088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x185e00 00:22:04.976 [2024-04-26 16:33:08.206359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.976 [2024-04-26 16:33:08.206370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:94096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x185e00 00:22:04.976 [2024-04-26 16:33:08.206379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.976 [2024-04-26 16:33:08.206390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:94104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x185e00 00:22:04.976 [2024-04-26 16:33:08.206399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.976 [2024-04-26 16:33:08.206410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:94112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x185e00 00:22:04.976 [2024-04-26 16:33:08.206420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.976 [2024-04-26 16:33:08.206431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:94120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0x185e00 00:22:04.977 [2024-04-26 16:33:08.206440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.977 [2024-04-26 16:33:08.206451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:94128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x185e00 00:22:04.977 [2024-04-26 16:33:08.206461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.977 [2024-04-26 
16:33:08.206472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:94136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x185e00 00:22:04.977 [2024-04-26 16:33:08.206481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.977 [2024-04-26 16:33:08.206492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:94144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 len:0x1000 key:0x185e00 00:22:04.977 [2024-04-26 16:33:08.206502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.977 [2024-04-26 16:33:08.206512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:94152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x185e00 00:22:04.977 [2024-04-26 16:33:08.206523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.977 [2024-04-26 16:33:08.206534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:94160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x185e00 00:22:04.977 [2024-04-26 16:33:08.206543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.977 [2024-04-26 16:33:08.206554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:94168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x185e00 00:22:04.977 [2024-04-26 16:33:08.206564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.977 [2024-04-26 16:33:08.206575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:94176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x185e00 00:22:04.977 [2024-04-26 16:33:08.206584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.977 [2024-04-26 16:33:08.206595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:94184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ba000 len:0x1000 key:0x185e00 00:22:04.977 [2024-04-26 16:33:08.206604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.977 [2024-04-26 16:33:08.206615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:94192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bc000 len:0x1000 key:0x185e00 00:22:04.977 [2024-04-26 16:33:08.206625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.977 [2024-04-26 16:33:08.206636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:94464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.977 [2024-04-26 16:33:08.206645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.977 [2024-04-26 16:33:08.206656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:94472 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:04.977 [2024-04-26 16:33:08.206665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.977 [2024-04-26 16:33:08.206676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:94480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.977 [2024-04-26 16:33:08.206685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.977 [2024-04-26 16:33:08.206696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:94488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.977 [2024-04-26 16:33:08.206705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.977 [2024-04-26 16:33:08.206715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.977 [2024-04-26 16:33:08.206725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.977 [2024-04-26 16:33:08.206735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:94504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.977 [2024-04-26 16:33:08.206745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.977 [2024-04-26 16:33:08.206756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:94512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.977 [2024-04-26 16:33:08.206766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.977 [2024-04-26 16:33:08.206778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.977 [2024-04-26 16:33:08.206787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.977 [2024-04-26 16:33:08.206798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:94200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x185e00 00:22:04.977 [2024-04-26 16:33:08.206808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.977 [2024-04-26 16:33:08.206819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:94208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x185e00 00:22:04.977 [2024-04-26 16:33:08.206828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.977 [2024-04-26 16:33:08.206839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:94216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x185e00 00:22:04.977 [2024-04-26 16:33:08.206848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.977 [2024-04-26 16:33:08.206859] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:34 nsid:1 lba:94224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x185e00 00:22:04.977 [2024-04-26 16:33:08.206869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.977 [2024-04-26 16:33:08.206880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:94232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x185e00 00:22:04.977 [2024-04-26 16:33:08.206889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.977 [2024-04-26 16:33:08.206900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:94240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x185e00 00:22:04.977 [2024-04-26 16:33:08.206909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.977 [2024-04-26 16:33:08.206920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:94248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x185e00 00:22:04.977 [2024-04-26 16:33:08.206930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.977 [2024-04-26 16:33:08.206940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:94256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x185e00 00:22:04.977 [2024-04-26 16:33:08.206949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.977 [2024-04-26 16:33:08.206960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:94264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075be000 len:0x1000 key:0x185e00 00:22:04.977 [2024-04-26 16:33:08.206970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.977 [2024-04-26 16:33:08.206980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:94528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.977 [2024-04-26 16:33:08.206989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.977 [2024-04-26 16:33:08.207000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:94536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.977 [2024-04-26 16:33:08.207011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.977 [2024-04-26 16:33:08.207022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:94544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.977 [2024-04-26 16:33:08.207032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.977 [2024-04-26 16:33:08.207042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:94552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.977 [2024-04-26 16:33:08.207052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.977 [2024-04-26 16:33:08.207062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:94560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.977 [2024-04-26 16:33:08.207072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.977 [2024-04-26 16:33:08.207083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:94568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.977 [2024-04-26 16:33:08.207092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.977 [2024-04-26 16:33:08.207103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:94576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.978 [2024-04-26 16:33:08.207112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.978 [2024-04-26 16:33:08.207123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:94584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.978 [2024-04-26 16:33:08.207132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.978 [2024-04-26 16:33:08.207143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:94592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.978 [2024-04-26 16:33:08.207152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.978 [2024-04-26 16:33:08.207162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:94600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.978 [2024-04-26 16:33:08.207172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.978 [2024-04-26 16:33:08.207183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:94608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.978 [2024-04-26 16:33:08.207192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.978 [2024-04-26 16:33:08.207203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:94616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.978 [2024-04-26 16:33:08.207212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.978 [2024-04-26 16:33:08.207223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:94624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.978 [2024-04-26 16:33:08.207232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.978 [2024-04-26 16:33:08.207243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:94632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.978 [2024-04-26 16:33:08.207252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.978 [2024-04-26 16:33:08.207264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:94640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.978 [2024-04-26 16:33:08.207274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.978 [2024-04-26 16:33:08.207284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:94648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.978 [2024-04-26 16:33:08.207294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.978 [2024-04-26 16:33:08.207305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:94656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.978 [2024-04-26 16:33:08.207314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.978 [2024-04-26 16:33:08.207324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:94664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.978 [2024-04-26 16:33:08.207334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.978 [2024-04-26 16:33:08.207344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:94672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.978 [2024-04-26 16:33:08.207357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.978 [2024-04-26 16:33:08.207368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:94680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.978 [2024-04-26 16:33:08.207378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.978 [2024-04-26 16:33:08.207389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:94688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.978 [2024-04-26 16:33:08.207398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.978 [2024-04-26 16:33:08.207409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:94696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.978 [2024-04-26 16:33:08.207418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.978 [2024-04-26 16:33:08.207429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:94704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.978 [2024-04-26 16:33:08.207438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.978 [2024-04-26 16:33:08.207449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:94712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.978 [2024-04-26 16:33:08.207458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 
sqhd:f200 p:0 m:0 dnr:0 00:22:04.978 [2024-04-26 16:33:08.207469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:94720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.978 [2024-04-26 16:33:08.207478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.978 [2024-04-26 16:33:08.207489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:94728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.978 [2024-04-26 16:33:08.207498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.978 [2024-04-26 16:33:08.207509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:94736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.978 [2024-04-26 16:33:08.207522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.978 [2024-04-26 16:33:08.207533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:94744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.978 [2024-04-26 16:33:08.207542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.978 [2024-04-26 16:33:08.207553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:94752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.978 [2024-04-26 16:33:08.207562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.978 [2024-04-26 16:33:08.207573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:94760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.978 [2024-04-26 16:33:08.207582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.978 [2024-04-26 16:33:08.207593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:94768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.978 [2024-04-26 16:33:08.207602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.978 [2024-04-26 16:33:08.207612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:94776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.978 [2024-04-26 16:33:08.207622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.978 [2024-04-26 16:33:08.207632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:94784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.978 [2024-04-26 16:33:08.207642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.978 [2024-04-26 16:33:08.207653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:94792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.978 [2024-04-26 16:33:08.207662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 
dnr:0 00:22:04.978 [2024-04-26 16:33:08.207675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:94800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.978 [2024-04-26 16:33:08.207684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.978 [2024-04-26 16:33:08.207695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:94808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.978 [2024-04-26 16:33:08.207705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.978 [2024-04-26 16:33:08.207716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:94816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.978 [2024-04-26 16:33:08.207725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.978 [2024-04-26 16:33:08.207736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:94272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x185e00 00:22:04.978 [2024-04-26 16:33:08.207745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.978 [2024-04-26 16:33:08.207756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:94280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x185e00 00:22:04.978 [2024-04-26 16:33:08.207767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.978 [2024-04-26 16:33:08.207778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:94288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x185e00 00:22:04.978 [2024-04-26 16:33:08.207787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.978 [2024-04-26 16:33:08.207798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:94296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x185e00 00:22:04.978 [2024-04-26 16:33:08.207808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.978 [2024-04-26 16:33:08.207818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:94304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x185e00 00:22:04.978 [2024-04-26 16:33:08.207828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.978 [2024-04-26 16:33:08.207838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x185e00 00:22:04.978 [2024-04-26 16:33:08.207848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.978 [2024-04-26 16:33:08.207859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:94320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 
len:0x1000 key:0x185e00 00:22:04.978 [2024-04-26 16:33:08.207868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.978 [2024-04-26 16:33:08.207879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:94328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x185e00 00:22:04.978 [2024-04-26 16:33:08.207888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.978 [2024-04-26 16:33:08.207898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:94824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.979 [2024-04-26 16:33:08.207907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.979 [2024-04-26 16:33:08.207919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:94832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.979 [2024-04-26 16:33:08.207928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.979 [2024-04-26 16:33:08.207938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:94840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.979 [2024-04-26 16:33:08.207947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.979 [2024-04-26 16:33:08.207958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:94336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x185e00 00:22:04.979 [2024-04-26 16:33:08.207968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.979 [2024-04-26 16:33:08.207979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:94344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x185e00 00:22:04.979 [2024-04-26 16:33:08.207988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.979 [2024-04-26 16:33:08.208000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:94352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x185e00 00:22:04.979 [2024-04-26 16:33:08.208011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.979 [2024-04-26 16:33:08.208022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:94360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x185e00 00:22:04.979 [2024-04-26 16:33:08.208031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.979 [2024-04-26 16:33:08.208042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:94368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x185e00 00:22:04.979 [2024-04-26 16:33:08.208052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 
00:22:04.979 [2024-04-26 16:33:08.208063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:94376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x185e00 00:22:04.979 [2024-04-26 16:33:08.208072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.979 [2024-04-26 16:33:08.208083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:94848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:04.979 [2024-04-26 16:33:08.208092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:22:04.979 [2024-04-26 16:33:08.209234] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:04.979 [2024-04-26 16:33:08.209248] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:04.979 [2024-04-26 16:33:08.209258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94856 len:8 PRP1 0x0 PRP2 0x0 00:22:04.979 [2024-04-26 16:33:08.209268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:04.979 [2024-04-26 16:33:08.209313] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4940 was disconnected and freed. reset controller. 00:22:04.979 [2024-04-26 16:33:08.209325] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:22:04.979 [2024-04-26 16:33:08.209335] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:04.979 [2024-04-26 16:33:08.212123] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.979 [2024-04-26 16:33:08.229135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:04.979 [2024-04-26 16:33:08.267970] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
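The burst of WRITE/READ notices completed with "ABORTED - SQ DELETION" above is the NVMe driver draining the I/O still queued on the qpair once its submission queue is deleted for the failover; the sequence ends with the qpair being disconnected and freed, a failover from 192.168.100.8:4422 back to 192.168.100.8:4420, and a successful controller reset. A minimal sketch, not part of the test scripts, for tallying these events from a captured bdevperf log such as the try.txt file this run writes (adjust the path for your own workspace):

    # Count the failover-related events in a captured log (path is this run's try.txt).
    log=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
    echo "aborted completions: $(grep -c 'ABORTED - SQ DELETION' "$log")"
    echo "failovers started:   $(grep -c 'Start failover' "$log")"
    echo "successful resets:   $(grep -c 'Resetting controller successful' "$log")"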
00:22:04.979
00:22:04.979 Latency(us)
00:22:04.979 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:04.979 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:04.979 Verification LBA range: start 0x0 length 0x4000
00:22:04.979 NVMe0n1 : 15.01 14307.73 55.89 305.82 0.00 8735.85 350.83 1021221.84
00:22:04.979 ===================================================================================================================
00:22:04.979 Total : 14307.73 55.89 305.82 0.00 8735.85 350.83 1021221.84
00:22:04.979 Received shutdown signal, test time was about 15.000000 seconds
00:22:04.979
00:22:04.979 Latency(us)
00:22:04.979 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:04.979 ===================================================================================================================
00:22:04.979 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:04.979 16:33:13 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:22:04.979 16:33:13 -- host/failover.sh@65 -- # count=3
00:22:04.979 16:33:13 -- host/failover.sh@67 -- # (( count != 3 ))
00:22:04.979 16:33:13 -- host/failover.sh@73 -- # bdevperf_pid=545180
00:22:04.979 16:33:13 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:22:04.979 16:33:13 -- host/failover.sh@75 -- # waitforlisten 545180 /var/tmp/bdevperf.sock
00:22:04.979 16:33:13 -- common/autotest_common.sh@817 -- # '[' -z 545180 ']'
00:22:04.979 16:33:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:04.979 16:33:13 -- common/autotest_common.sh@822 -- # local max_retries=100
00:22:04.979 16:33:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:04.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
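The 15-second phase above passes when grep -c counts exactly three "Resetting controller successful" messages, one per failover exercised during that run. The second phase started here launches bdevperf with -z so it waits for RPC commands on /var/tmp/bdevperf.sock before doing any I/O, and waitforlisten simply polls until that socket answers. A hedged sketch of the same launch-and-wait pattern; using rpc_get_methods as the readiness probe is an assumption on my part, the harness's own waitforlisten helper does the equivalent:

    # Start bdevperf in RPC-driven mode and wait until its RPC socket is live.
    spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    sock=/var/tmp/bdevperf.sock
    "$spdk/build/examples/bdevperf" -z -r "$sock" -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!
    until "$spdk/scripts/rpc.py" -s "$sock" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5    # RPC socket not up yet
    done
    echo "bdevperf (pid $bdevperf_pid) is ready for bdev_nvme RPCs"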
00:22:04.979 16:33:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:04.979 16:33:13 -- common/autotest_common.sh@10 -- # set +x 00:22:05.582 16:33:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:05.582 16:33:14 -- common/autotest_common.sh@850 -- # return 0 00:22:05.582 16:33:14 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:22:05.879 [2024-04-26 16:33:14.654972] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:22:05.879 16:33:14 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:22:05.879 [2024-04-26 16:33:14.839576] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:22:05.879 16:33:14 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:06.162 NVMe0n1 00:22:06.162 16:33:15 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:06.420 00:22:06.420 16:33:15 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:06.679 00:22:06.679 16:33:15 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:06.679 16:33:15 -- host/failover.sh@82 -- # grep -q NVMe0 00:22:06.937 16:33:15 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:07.194 16:33:16 -- host/failover.sh@87 -- # sleep 3 00:22:10.513 16:33:19 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:10.513 16:33:19 -- host/failover.sh@88 -- # grep -q NVMe0 00:22:10.513 16:33:19 -- host/failover.sh@90 -- # run_test_pid=545959 00:22:10.513 16:33:19 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:10.513 16:33:19 -- host/failover.sh@92 -- # wait 545959 00:22:11.447 0 00:22:11.447 16:33:20 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:11.447 [2024-04-26 16:33:13.675457] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
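The steps above add listeners on ports 4421 and 4422 to nqn.2016-06.io.spdk:cnode1 and then attach the same subsystem through all three portals under the single controller name NVMe0, which is what gives bdev_nvme the alternate paths it can fail over to; detaching the 4420 path afterwards is what forces the first failover. A condensed sketch of that sequence, reusing the rpc.py calls shown above (the 4420 listener is assumed to exist already from the earlier target setup):

    # Register the extra listeners on the target, then attach all three paths to NVMe0.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    for port in 4421 4422; do
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s "$port"
    done
    for port in 4420 4421 4422; do
        $rpc -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 \
            -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done
    # Dropping one path makes bdev_nvme fail over to the remaining listeners.
    $rpc -s "$sock" bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1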
00:22:11.447 [2024-04-26 16:33:13.675526] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid545180 ]
00:22:11.447 EAL: No free 2048 kB hugepages reported on node 1
00:22:11.447 [2024-04-26 16:33:13.751522] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:11.447 [2024-04-26 16:33:13.827241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:22:11.447 [2024-04-26 16:33:15.981020] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421
00:22:11.447 [2024-04-26 16:33:15.981506] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:11.447 [2024-04-26 16:33:15.981541] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:11.447 [2024-04-26 16:33:15.997971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:22:11.447 [2024-04-26 16:33:16.014357] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:22:11.447 Running I/O for 1 seconds...
00:22:11.447
00:22:11.447 Latency(us)
00:22:11.447 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:11.447 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:11.447 Verification LBA range: start 0x0 length 0x4000
00:22:11.447 NVMe0n1 : 1.01 18068.91 70.58 0.00 0.00 7045.36 2550.21 10827.69
00:22:11.447 ===================================================================================================================
00:22:11.447 Total : 18068.91 70.58 0.00 0.00 7045.36 2550.21 10827.69
00:22:11.447 16:33:20 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:22:11.447 16:33:20 -- host/failover.sh@95 -- # grep -q NVMe0
00:22:11.705 16:33:20 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:11.962 16:33:20 -- host/failover.sh@99 -- # grep -q NVMe0
00:22:11.962 16:33:20 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:22:11.962 16:33:20 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:12.220 16:33:21 -- host/failover.sh@101 -- # sleep 3
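After each detach the script re-runs bdev_nvme_get_controllers and greps for NVMe0, confirming the controller object is still registered on the remaining path(s) before it removes the next one. The same check, written out as a standalone sketch:

    # Verify the controller survived the path removal (exit non-zero if not).
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    if $rpc -s "$sock" bdev_nvme_get_controllers | grep -q NVMe0; then
        echo "NVMe0 still registered after removing a path"
    else
        echo "NVMe0 disappeared after detach" >&2
        exit 1
    fi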
545180 00:22:15.501 16:33:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:15.501 16:33:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:15.501 16:33:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 545180' 00:22:15.501 killing process with pid 545180 00:22:15.501 16:33:24 -- common/autotest_common.sh@955 -- # kill 545180 00:22:15.501 16:33:24 -- common/autotest_common.sh@960 -- # wait 545180 00:22:15.759 16:33:24 -- host/failover.sh@110 -- # sync 00:22:15.759 16:33:24 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:15.759 16:33:24 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:15.759 16:33:24 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:15.759 16:33:24 -- host/failover.sh@116 -- # nvmftestfini 00:22:15.759 16:33:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:15.759 16:33:24 -- nvmf/common.sh@117 -- # sync 00:22:16.018 16:33:24 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:16.018 16:33:24 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:16.018 16:33:24 -- nvmf/common.sh@120 -- # set +e 00:22:16.018 16:33:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:16.018 16:33:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:16.018 rmmod nvme_rdma 00:22:16.018 rmmod nvme_fabrics 00:22:16.018 16:33:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:16.018 16:33:24 -- nvmf/common.sh@124 -- # set -e 00:22:16.018 16:33:24 -- nvmf/common.sh@125 -- # return 0 00:22:16.018 16:33:24 -- nvmf/common.sh@478 -- # '[' -n 542196 ']' 00:22:16.018 16:33:24 -- nvmf/common.sh@479 -- # killprocess 542196 00:22:16.018 16:33:24 -- common/autotest_common.sh@936 -- # '[' -z 542196 ']' 00:22:16.018 16:33:24 -- common/autotest_common.sh@940 -- # kill -0 542196 00:22:16.018 16:33:24 -- common/autotest_common.sh@941 -- # uname 00:22:16.018 16:33:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:16.019 16:33:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 542196 00:22:16.019 16:33:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:16.019 16:33:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:16.019 16:33:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 542196' 00:22:16.019 killing process with pid 542196 00:22:16.019 16:33:24 -- common/autotest_common.sh@955 -- # kill 542196 00:22:16.019 16:33:24 -- common/autotest_common.sh@960 -- # wait 542196 00:22:16.019 [2024-04-26 16:33:24.951154] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:22:16.278 16:33:25 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:16.278 16:33:25 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:22:16.278 00:22:16.278 real 0m37.109s 00:22:16.278 user 2m4.722s 00:22:16.278 sys 0m7.311s 00:22:16.279 16:33:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:16.279 16:33:25 -- common/autotest_common.sh@10 -- # set +x 00:22:16.279 ************************************ 00:22:16.279 END TEST nvmf_failover 00:22:16.279 ************************************ 00:22:16.279 16:33:25 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:22:16.279 16:33:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:16.279 16:33:25 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:22:16.279 16:33:25 -- common/autotest_common.sh@10 -- # set +x 00:22:16.538 ************************************ 00:22:16.538 START TEST nvmf_discovery 00:22:16.538 ************************************ 00:22:16.538 16:33:25 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:22:16.538 * Looking for test storage... 00:22:16.538 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:16.538 16:33:25 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:16.538 16:33:25 -- nvmf/common.sh@7 -- # uname -s 00:22:16.538 16:33:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:16.538 16:33:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:16.538 16:33:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:16.538 16:33:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:16.538 16:33:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:16.538 16:33:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:16.538 16:33:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:16.538 16:33:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:16.538 16:33:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:16.538 16:33:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:16.538 16:33:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:22:16.538 16:33:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:22:16.538 16:33:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:16.538 16:33:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:16.538 16:33:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:16.538 16:33:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:16.538 16:33:25 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:16.538 16:33:25 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:16.538 16:33:25 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:16.538 16:33:25 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:16.538 16:33:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.538 16:33:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.538 16:33:25 -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.538 16:33:25 -- paths/export.sh@5 -- # export PATH 00:22:16.538 16:33:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.538 16:33:25 -- nvmf/common.sh@47 -- # : 0 00:22:16.538 16:33:25 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:16.538 16:33:25 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:16.538 16:33:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:16.538 16:33:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:16.538 16:33:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:16.538 16:33:25 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:16.538 16:33:25 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:16.538 16:33:25 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:16.538 16:33:25 -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:22:16.538 16:33:25 -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:22:16.538 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:22:16.538 16:33:25 -- host/discovery.sh@13 -- # exit 0 00:22:16.538 00:22:16.538 real 0m0.142s 00:22:16.538 user 0m0.070s 00:22:16.538 sys 0m0.084s 00:22:16.538 16:33:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:16.538 16:33:25 -- common/autotest_common.sh@10 -- # set +x 00:22:16.538 ************************************ 00:22:16.538 END TEST nvmf_discovery 00:22:16.538 ************************************ 00:22:16.538 16:33:25 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:22:16.538 16:33:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:16.538 16:33:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:16.538 16:33:25 -- common/autotest_common.sh@10 -- # set +x 00:22:16.797 ************************************ 00:22:16.797 START TEST nvmf_discovery_remove_ifc 00:22:16.797 ************************************ 00:22:16.797 16:33:25 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:22:17.057 * Looking for test storage... 
00:22:17.057 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:17.057 16:33:25 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:17.057 16:33:25 -- nvmf/common.sh@7 -- # uname -s 00:22:17.057 16:33:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:17.057 16:33:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:17.057 16:33:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:17.057 16:33:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:17.057 16:33:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:17.057 16:33:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:17.057 16:33:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:17.057 16:33:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:17.057 16:33:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:17.057 16:33:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:17.057 16:33:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:22:17.057 16:33:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:22:17.057 16:33:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:17.057 16:33:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:17.057 16:33:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:17.057 16:33:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:17.057 16:33:25 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:17.057 16:33:25 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:17.057 16:33:25 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:17.057 16:33:25 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:17.057 16:33:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.057 16:33:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.057 16:33:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.057 16:33:25 -- paths/export.sh@5 -- # export PATH 00:22:17.057 16:33:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.057 16:33:25 -- nvmf/common.sh@47 -- # : 0 00:22:17.057 16:33:25 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:17.057 16:33:25 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:17.057 16:33:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:17.057 16:33:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:17.057 16:33:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:17.057 16:33:25 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:17.057 16:33:25 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:17.057 16:33:25 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:17.057 16:33:25 -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:22:17.057 16:33:25 -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:22:17.057 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:22:17.057 16:33:25 -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:22:17.057 00:22:17.057 real 0m0.143s 00:22:17.057 user 0m0.068s 00:22:17.057 sys 0m0.086s 00:22:17.057 16:33:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:17.057 16:33:25 -- common/autotest_common.sh@10 -- # set +x 00:22:17.057 ************************************ 00:22:17.057 END TEST nvmf_discovery_remove_ifc 00:22:17.057 ************************************ 00:22:17.057 16:33:25 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:22:17.057 16:33:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:17.057 16:33:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:17.057 16:33:25 -- common/autotest_common.sh@10 -- # set +x 00:22:17.057 ************************************ 00:22:17.057 START TEST nvmf_identify_kernel_target 00:22:17.057 ************************************ 00:22:17.057 16:33:26 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:22:17.317 * Looking for test storage... 
00:22:17.317 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:17.317 16:33:26 -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:17.317 16:33:26 -- nvmf/common.sh@7 -- # uname -s 00:22:17.317 16:33:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:17.317 16:33:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:17.317 16:33:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:17.317 16:33:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:17.317 16:33:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:17.317 16:33:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:17.317 16:33:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:17.317 16:33:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:17.317 16:33:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:17.317 16:33:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:17.317 16:33:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:22:17.317 16:33:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:22:17.317 16:33:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:17.317 16:33:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:17.317 16:33:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:17.317 16:33:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:17.317 16:33:26 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:17.317 16:33:26 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:17.317 16:33:26 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:17.317 16:33:26 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:17.317 16:33:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.317 16:33:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.317 16:33:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.317 16:33:26 -- paths/export.sh@5 -- # export PATH 00:22:17.317 16:33:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.317 16:33:26 -- nvmf/common.sh@47 -- # : 0 00:22:17.317 16:33:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:17.317 16:33:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:17.317 16:33:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:17.317 16:33:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:17.317 16:33:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:17.317 16:33:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:17.317 16:33:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:17.317 16:33:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:17.317 16:33:26 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:22:17.317 16:33:26 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:22:17.317 16:33:26 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:17.317 16:33:26 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:17.317 16:33:26 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:17.317 16:33:26 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:17.317 16:33:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.317 16:33:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:17.317 16:33:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.318 16:33:26 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:17.318 16:33:26 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:17.318 16:33:26 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:17.318 16:33:26 -- common/autotest_common.sh@10 -- # set +x 00:22:23.880 16:33:32 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:23.880 16:33:32 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:23.880 16:33:32 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:23.880 16:33:32 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:23.880 16:33:32 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:23.880 16:33:32 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:23.880 16:33:32 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:23.880 16:33:32 -- nvmf/common.sh@295 -- # net_devs=() 00:22:23.880 16:33:32 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:23.880 16:33:32 -- nvmf/common.sh@296 -- # e810=() 00:22:23.880 16:33:32 -- nvmf/common.sh@296 -- # local -ga e810 00:22:23.880 16:33:32 -- nvmf/common.sh@297 -- # 
x722=() 00:22:23.880 16:33:32 -- nvmf/common.sh@297 -- # local -ga x722 00:22:23.880 16:33:32 -- nvmf/common.sh@298 -- # mlx=() 00:22:23.880 16:33:32 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:23.880 16:33:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:23.880 16:33:32 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:23.880 16:33:32 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:23.880 16:33:32 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:23.880 16:33:32 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:23.880 16:33:32 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:23.880 16:33:32 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:23.881 16:33:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:23.881 16:33:32 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:23.881 16:33:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:23.881 16:33:32 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:23.881 16:33:32 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:23.881 16:33:32 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:23.881 16:33:32 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:23.881 16:33:32 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:23.881 16:33:32 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:23.881 16:33:32 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:23.881 16:33:32 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:23.881 16:33:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:23.881 16:33:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:22:23.881 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:22:23.881 16:33:32 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:23.881 16:33:32 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:23.881 16:33:32 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:22:23.881 16:33:32 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:22:23.881 16:33:32 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:23.881 16:33:32 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:23.881 16:33:32 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:23.881 16:33:32 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:22:23.881 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:22:23.881 16:33:32 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:23.881 16:33:32 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:23.881 16:33:32 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:22:23.881 16:33:32 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:22:23.881 16:33:32 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:23.881 16:33:32 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:23.881 16:33:32 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:23.881 16:33:32 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:23.881 16:33:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:23.881 16:33:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.881 16:33:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:23.881 16:33:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.881 16:33:32 -- 
nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:22:23.881 Found net devices under 0000:18:00.0: mlx_0_0 00:22:23.881 16:33:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.881 16:33:32 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:23.881 16:33:32 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.881 16:33:32 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:23.881 16:33:32 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.881 16:33:32 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:22:23.881 Found net devices under 0000:18:00.1: mlx_0_1 00:22:23.881 16:33:32 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.881 16:33:32 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:23.881 16:33:32 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:23.881 16:33:32 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:23.881 16:33:32 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:22:23.881 16:33:32 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:22:23.881 16:33:32 -- nvmf/common.sh@409 -- # rdma_device_init 00:22:23.881 16:33:32 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:22:23.881 16:33:32 -- nvmf/common.sh@58 -- # uname 00:22:23.881 16:33:32 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:23.881 16:33:32 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:23.881 16:33:32 -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:23.881 16:33:32 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:23.881 16:33:32 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:23.881 16:33:32 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:23.881 16:33:32 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:23.881 16:33:32 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:23.881 16:33:32 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:22:23.881 16:33:32 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:23.881 16:33:32 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:23.881 16:33:32 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:23.881 16:33:32 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:23.881 16:33:32 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:23.881 16:33:32 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:23.881 16:33:32 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:23.881 16:33:32 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:23.881 16:33:32 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:23.881 16:33:32 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:23.881 16:33:32 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:23.881 16:33:32 -- nvmf/common.sh@105 -- # continue 2 00:22:23.881 16:33:32 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:23.881 16:33:32 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:23.881 16:33:32 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:23.881 16:33:32 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:23.881 16:33:32 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:23.881 16:33:32 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:23.881 16:33:32 -- nvmf/common.sh@105 -- # continue 2 00:22:23.881 16:33:32 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:23.881 16:33:32 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:23.881 16:33:32 -- 
nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:23.881 16:33:32 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:23.881 16:33:32 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:23.881 16:33:32 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:23.881 16:33:32 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:23.881 16:33:32 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:23.881 16:33:32 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:23.881 15: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:23.881 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:22:23.881 altname enp24s0f0np0 00:22:23.881 altname ens785f0np0 00:22:23.881 inet 192.168.100.8/24 scope global mlx_0_0 00:22:23.881 valid_lft forever preferred_lft forever 00:22:23.881 16:33:32 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:23.881 16:33:32 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:23.881 16:33:32 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:23.881 16:33:32 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:23.881 16:33:32 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:23.881 16:33:32 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:23.881 16:33:32 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:23.881 16:33:32 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:23.881 16:33:32 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:23.881 16: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:23.881 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:22:23.881 altname enp24s0f1np1 00:22:23.881 altname ens785f1np1 00:22:23.881 inet 192.168.100.9/24 scope global mlx_0_1 00:22:23.881 valid_lft forever preferred_lft forever 00:22:23.881 16:33:32 -- nvmf/common.sh@411 -- # return 0 00:22:23.881 16:33:32 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:23.881 16:33:32 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:23.881 16:33:32 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:22:23.881 16:33:32 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:22:23.881 16:33:32 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:23.881 16:33:32 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:23.881 16:33:32 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:23.881 16:33:32 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:23.881 16:33:32 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:23.881 16:33:32 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:23.881 16:33:32 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:23.881 16:33:32 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:23.881 16:33:32 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:23.881 16:33:32 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:23.881 16:33:32 -- nvmf/common.sh@105 -- # continue 2 00:22:23.881 16:33:32 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:23.881 16:33:32 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:23.881 16:33:32 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:23.881 16:33:32 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:23.881 16:33:32 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:23.881 16:33:32 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:23.881 16:33:32 -- nvmf/common.sh@105 -- # continue 2 00:22:23.881 16:33:32 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:23.881 
16:33:32 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:23.881 16:33:32 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:23.881 16:33:32 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:23.881 16:33:32 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:23.881 16:33:32 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:23.881 16:33:32 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:23.881 16:33:32 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:23.881 16:33:32 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:23.881 16:33:32 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:23.881 16:33:32 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:23.881 16:33:32 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:23.881 16:33:32 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:22:23.881 192.168.100.9' 00:22:23.881 16:33:32 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:22:23.881 192.168.100.9' 00:22:23.881 16:33:32 -- nvmf/common.sh@446 -- # head -n 1 00:22:23.881 16:33:32 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:23.881 16:33:32 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:22:23.881 192.168.100.9' 00:22:23.881 16:33:32 -- nvmf/common.sh@447 -- # tail -n +2 00:22:23.881 16:33:32 -- nvmf/common.sh@447 -- # head -n 1 00:22:23.881 16:33:32 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:23.882 16:33:32 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:22:23.882 16:33:32 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:23.882 16:33:32 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:22:23.882 16:33:32 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:22:23.882 16:33:32 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:22:23.882 16:33:32 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:22:23.882 16:33:32 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:22:23.882 16:33:32 -- nvmf/common.sh@717 -- # local ip 00:22:23.882 16:33:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:23.882 16:33:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:23.882 16:33:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:23.882 16:33:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:23.882 16:33:32 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:23.882 16:33:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:23.882 16:33:32 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:23.882 16:33:32 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:23.882 16:33:32 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:23.882 16:33:32 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:22:23.882 16:33:32 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:22:23.882 16:33:32 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:22:23.882 16:33:32 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:22:23.882 16:33:32 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:23.882 16:33:32 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:23.882 16:33:32 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:23.882 16:33:32 -- nvmf/common.sh@628 -- # local block 
nvme 00:22:23.882 16:33:32 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:22:23.882 16:33:32 -- nvmf/common.sh@631 -- # modprobe nvmet 00:22:23.882 16:33:32 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:23.882 16:33:32 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:22:26.417 Waiting for block devices as requested 00:22:26.417 0000:5e:00.0 (144d a80a): vfio-pci -> nvme 00:22:26.677 0000:af:00.0 (8086 2701): vfio-pci -> nvme 00:22:26.677 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:22:26.677 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:22:26.937 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:22:26.937 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:22:26.937 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:22:26.937 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:22:27.198 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:22:27.198 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:22:27.198 0000:b0:00.0 (8086 2701): vfio-pci -> nvme 00:22:27.457 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:22:27.457 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:22:27.457 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:22:27.716 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:22:27.716 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:22:27.975 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:22:27.975 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:22:27.975 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:22:28.234 16:33:37 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:22:28.234 16:33:37 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:28.234 16:33:37 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:22:28.234 16:33:37 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:22:28.234 16:33:37 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:28.234 16:33:37 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:22:28.234 16:33:37 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:22:28.234 16:33:37 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:22:28.234 16:33:37 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:22:28.234 No valid GPT data, bailing 00:22:28.234 16:33:37 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:28.234 16:33:37 -- scripts/common.sh@391 -- # pt= 00:22:28.234 16:33:37 -- scripts/common.sh@392 -- # return 1 00:22:28.234 16:33:37 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:22:28.234 16:33:37 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:22:28.234 16:33:37 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:22:28.234 16:33:37 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:22:28.234 16:33:37 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:22:28.234 16:33:37 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:22:28.234 16:33:37 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:22:28.234 16:33:37 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:22:28.234 16:33:37 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:22:28.234 16:33:37 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:22:28.234 No valid GPT data, bailing 00:22:28.234 16:33:37 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:22:28.234 16:33:37 -- scripts/common.sh@391 -- # pt= 00:22:28.234 
16:33:37 -- scripts/common.sh@392 -- # return 1 00:22:28.234 16:33:37 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:22:28.234 16:33:37 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:22:28.234 16:33:37 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme2n1 ]] 00:22:28.234 16:33:37 -- nvmf/common.sh@641 -- # is_block_zoned nvme2n1 00:22:28.234 16:33:37 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:22:28.234 16:33:37 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:22:28.234 16:33:37 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:22:28.234 16:33:37 -- nvmf/common.sh@642 -- # block_in_use nvme2n1 00:22:28.234 16:33:37 -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:22:28.234 16:33:37 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme2n1 00:22:28.234 No valid GPT data, bailing 00:22:28.234 16:33:37 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:22:28.234 16:33:37 -- scripts/common.sh@391 -- # pt= 00:22:28.234 16:33:37 -- scripts/common.sh@392 -- # return 1 00:22:28.234 16:33:37 -- nvmf/common.sh@642 -- # nvme=/dev/nvme2n1 00:22:28.234 16:33:37 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme2n1 ]] 00:22:28.234 16:33:37 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:28.234 16:33:37 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:28.234 16:33:37 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:28.234 16:33:37 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:22:28.234 16:33:37 -- nvmf/common.sh@656 -- # echo 1 00:22:28.234 16:33:37 -- nvmf/common.sh@657 -- # echo /dev/nvme2n1 00:22:28.234 16:33:37 -- nvmf/common.sh@658 -- # echo 1 00:22:28.234 16:33:37 -- nvmf/common.sh@660 -- # echo 192.168.100.8 00:22:28.234 16:33:37 -- nvmf/common.sh@661 -- # echo rdma 00:22:28.234 16:33:37 -- nvmf/common.sh@662 -- # echo 4420 00:22:28.234 16:33:37 -- nvmf/common.sh@663 -- # echo ipv4 00:22:28.234 16:33:37 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:28.234 16:33:37 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -a 192.168.100.8 -t rdma -s 4420 00:22:28.493 00:22:28.493 Discovery Log Number of Records 2, Generation counter 2 00:22:28.493 =====Discovery Log Entry 0====== 00:22:28.493 trtype: rdma 00:22:28.493 adrfam: ipv4 00:22:28.493 subtype: current discovery subsystem 00:22:28.493 treq: not specified, sq flow control disable supported 00:22:28.493 portid: 1 00:22:28.493 trsvcid: 4420 00:22:28.493 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:28.493 traddr: 192.168.100.8 00:22:28.493 eflags: none 00:22:28.493 rdma_prtype: not specified 00:22:28.493 rdma_qptype: connected 00:22:28.493 rdma_cms: rdma-cm 00:22:28.493 rdma_pkey: 0x0000 00:22:28.493 =====Discovery Log Entry 1====== 00:22:28.493 trtype: rdma 00:22:28.493 adrfam: ipv4 00:22:28.493 subtype: nvme subsystem 00:22:28.493 treq: not specified, sq flow control disable supported 00:22:28.493 portid: 1 00:22:28.494 trsvcid: 4420 00:22:28.494 subnqn: nqn.2016-06.io.spdk:testnqn 00:22:28.494 traddr: 192.168.100.8 00:22:28.494 eflags: none 00:22:28.494 rdma_prtype: not specified 00:22:28.494 rdma_qptype: connected 00:22:28.494 
rdma_cms: rdma-cm 00:22:28.494 rdma_pkey: 0x0000 00:22:28.494 16:33:37 -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:22:28.494 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:22:28.494 EAL: No free 2048 kB hugepages reported on node 1 00:22:28.494 ===================================================== 00:22:28.494 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:28.494 ===================================================== 00:22:28.494 Controller Capabilities/Features 00:22:28.494 ================================ 00:22:28.494 Vendor ID: 0000 00:22:28.494 Subsystem Vendor ID: 0000 00:22:28.494 Serial Number: a06b04ba673f075396fa 00:22:28.494 Model Number: Linux 00:22:28.494 Firmware Version: 6.7.0-68 00:22:28.494 Recommended Arb Burst: 0 00:22:28.494 IEEE OUI Identifier: 00 00 00 00:22:28.494 Multi-path I/O 00:22:28.494 May have multiple subsystem ports: No 00:22:28.494 May have multiple controllers: No 00:22:28.494 Associated with SR-IOV VF: No 00:22:28.494 Max Data Transfer Size: Unlimited 00:22:28.494 Max Number of Namespaces: 0 00:22:28.494 Max Number of I/O Queues: 1024 00:22:28.494 NVMe Specification Version (VS): 1.3 00:22:28.494 NVMe Specification Version (Identify): 1.3 00:22:28.494 Maximum Queue Entries: 128 00:22:28.494 Contiguous Queues Required: No 00:22:28.494 Arbitration Mechanisms Supported 00:22:28.494 Weighted Round Robin: Not Supported 00:22:28.494 Vendor Specific: Not Supported 00:22:28.494 Reset Timeout: 7500 ms 00:22:28.494 Doorbell Stride: 4 bytes 00:22:28.494 NVM Subsystem Reset: Not Supported 00:22:28.494 Command Sets Supported 00:22:28.494 NVM Command Set: Supported 00:22:28.494 Boot Partition: Not Supported 00:22:28.494 Memory Page Size Minimum: 4096 bytes 00:22:28.494 Memory Page Size Maximum: 4096 bytes 00:22:28.494 Persistent Memory Region: Not Supported 00:22:28.494 Optional Asynchronous Events Supported 00:22:28.494 Namespace Attribute Notices: Not Supported 00:22:28.494 Firmware Activation Notices: Not Supported 00:22:28.494 ANA Change Notices: Not Supported 00:22:28.494 PLE Aggregate Log Change Notices: Not Supported 00:22:28.494 LBA Status Info Alert Notices: Not Supported 00:22:28.494 EGE Aggregate Log Change Notices: Not Supported 00:22:28.494 Normal NVM Subsystem Shutdown event: Not Supported 00:22:28.494 Zone Descriptor Change Notices: Not Supported 00:22:28.494 Discovery Log Change Notices: Supported 00:22:28.494 Controller Attributes 00:22:28.494 128-bit Host Identifier: Not Supported 00:22:28.494 Non-Operational Permissive Mode: Not Supported 00:22:28.494 NVM Sets: Not Supported 00:22:28.494 Read Recovery Levels: Not Supported 00:22:28.494 Endurance Groups: Not Supported 00:22:28.494 Predictable Latency Mode: Not Supported 00:22:28.494 Traffic Based Keep ALive: Not Supported 00:22:28.494 Namespace Granularity: Not Supported 00:22:28.494 SQ Associations: Not Supported 00:22:28.494 UUID List: Not Supported 00:22:28.494 Multi-Domain Subsystem: Not Supported 00:22:28.494 Fixed Capacity Management: Not Supported 00:22:28.494 Variable Capacity Management: Not Supported 00:22:28.494 Delete Endurance Group: Not Supported 00:22:28.494 Delete NVM Set: Not Supported 00:22:28.494 Extended LBA Formats Supported: Not Supported 00:22:28.494 Flexible Data Placement Supported: Not Supported 00:22:28.494 00:22:28.494 Controller Memory Buffer Support 00:22:28.494 
================================ 00:22:28.494 Supported: No 00:22:28.494 00:22:28.494 Persistent Memory Region Support 00:22:28.494 ================================ 00:22:28.494 Supported: No 00:22:28.494 00:22:28.494 Admin Command Set Attributes 00:22:28.494 ============================ 00:22:28.494 Security Send/Receive: Not Supported 00:22:28.494 Format NVM: Not Supported 00:22:28.494 Firmware Activate/Download: Not Supported 00:22:28.494 Namespace Management: Not Supported 00:22:28.494 Device Self-Test: Not Supported 00:22:28.494 Directives: Not Supported 00:22:28.494 NVMe-MI: Not Supported 00:22:28.494 Virtualization Management: Not Supported 00:22:28.494 Doorbell Buffer Config: Not Supported 00:22:28.494 Get LBA Status Capability: Not Supported 00:22:28.494 Command & Feature Lockdown Capability: Not Supported 00:22:28.494 Abort Command Limit: 1 00:22:28.494 Async Event Request Limit: 1 00:22:28.494 Number of Firmware Slots: N/A 00:22:28.494 Firmware Slot 1 Read-Only: N/A 00:22:28.494 Firmware Activation Without Reset: N/A 00:22:28.494 Multiple Update Detection Support: N/A 00:22:28.494 Firmware Update Granularity: No Information Provided 00:22:28.494 Per-Namespace SMART Log: No 00:22:28.494 Asymmetric Namespace Access Log Page: Not Supported 00:22:28.494 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:28.494 Command Effects Log Page: Not Supported 00:22:28.494 Get Log Page Extended Data: Supported 00:22:28.494 Telemetry Log Pages: Not Supported 00:22:28.494 Persistent Event Log Pages: Not Supported 00:22:28.494 Supported Log Pages Log Page: May Support 00:22:28.494 Commands Supported & Effects Log Page: Not Supported 00:22:28.494 Feature Identifiers & Effects Log Page:May Support 00:22:28.494 NVMe-MI Commands & Effects Log Page: May Support 00:22:28.494 Data Area 4 for Telemetry Log: Not Supported 00:22:28.494 Error Log Page Entries Supported: 1 00:22:28.494 Keep Alive: Not Supported 00:22:28.494 00:22:28.494 NVM Command Set Attributes 00:22:28.494 ========================== 00:22:28.494 Submission Queue Entry Size 00:22:28.494 Max: 1 00:22:28.494 Min: 1 00:22:28.494 Completion Queue Entry Size 00:22:28.494 Max: 1 00:22:28.494 Min: 1 00:22:28.494 Number of Namespaces: 0 00:22:28.494 Compare Command: Not Supported 00:22:28.494 Write Uncorrectable Command: Not Supported 00:22:28.494 Dataset Management Command: Not Supported 00:22:28.494 Write Zeroes Command: Not Supported 00:22:28.494 Set Features Save Field: Not Supported 00:22:28.494 Reservations: Not Supported 00:22:28.494 Timestamp: Not Supported 00:22:28.494 Copy: Not Supported 00:22:28.494 Volatile Write Cache: Not Present 00:22:28.494 Atomic Write Unit (Normal): 1 00:22:28.494 Atomic Write Unit (PFail): 1 00:22:28.494 Atomic Compare & Write Unit: 1 00:22:28.494 Fused Compare & Write: Not Supported 00:22:28.494 Scatter-Gather List 00:22:28.494 SGL Command Set: Supported 00:22:28.494 SGL Keyed: Supported 00:22:28.494 SGL Bit Bucket Descriptor: Not Supported 00:22:28.494 SGL Metadata Pointer: Not Supported 00:22:28.494 Oversized SGL: Not Supported 00:22:28.494 SGL Metadata Address: Not Supported 00:22:28.494 SGL Offset: Supported 00:22:28.494 Transport SGL Data Block: Not Supported 00:22:28.494 Replay Protected Memory Block: Not Supported 00:22:28.494 00:22:28.494 Firmware Slot Information 00:22:28.494 ========================= 00:22:28.494 Active slot: 0 00:22:28.494 00:22:28.494 00:22:28.494 Error Log 00:22:28.494 ========= 00:22:28.494 00:22:28.494 Active Namespaces 00:22:28.494 ================= 00:22:28.494 Discovery 
Log Page 00:22:28.494 ================== 00:22:28.494 Generation Counter: 2 00:22:28.494 Number of Records: 2 00:22:28.494 Record Format: 0 00:22:28.494 00:22:28.494 Discovery Log Entry 0 00:22:28.494 ---------------------- 00:22:28.494 Transport Type: 1 (RDMA) 00:22:28.494 Address Family: 1 (IPv4) 00:22:28.494 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:28.494 Entry Flags: 00:22:28.494 Duplicate Returned Information: 0 00:22:28.494 Explicit Persistent Connection Support for Discovery: 0 00:22:28.494 Transport Requirements: 00:22:28.494 Secure Channel: Not Specified 00:22:28.494 Port ID: 1 (0x0001) 00:22:28.494 Controller ID: 65535 (0xffff) 00:22:28.494 Admin Max SQ Size: 32 00:22:28.494 Transport Service Identifier: 4420 00:22:28.494 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:28.494 Transport Address: 192.168.100.8 00:22:28.494 Transport Specific Address Subtype - RDMA 00:22:28.494 RDMA QP Service Type: 1 (Reliable Connected) 00:22:28.494 RDMA Provider Type: 1 (No provider specified) 00:22:28.494 RDMA CM Service: 1 (RDMA_CM) 00:22:28.494 Discovery Log Entry 1 00:22:28.494 ---------------------- 00:22:28.494 Transport Type: 1 (RDMA) 00:22:28.494 Address Family: 1 (IPv4) 00:22:28.494 Subsystem Type: 2 (NVM Subsystem) 00:22:28.494 Entry Flags: 00:22:28.494 Duplicate Returned Information: 0 00:22:28.494 Explicit Persistent Connection Support for Discovery: 0 00:22:28.494 Transport Requirements: 00:22:28.494 Secure Channel: Not Specified 00:22:28.494 Port ID: 1 (0x0001) 00:22:28.495 Controller ID: 65535 (0xffff) 00:22:28.495 Admin Max SQ Size: 32 00:22:28.495 Transport Service Identifier: 4420 00:22:28.495 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:22:28.495 Transport Address: 192.168.100.8 00:22:28.495 Transport Specific Address Subtype - RDMA 00:22:28.495 RDMA QP Service Type: 1 (Reliable Connected) 00:22:28.495 RDMA Provider Type: 1 (No provider specified) 00:22:28.495 RDMA CM Service: 1 (RDMA_CM) 00:22:28.495 16:33:37 -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:28.495 EAL: No free 2048 kB hugepages reported on node 1 00:22:28.754 get_feature(0x01) failed 00:22:28.754 get_feature(0x02) failed 00:22:28.754 get_feature(0x04) failed 00:22:28.754 ===================================================== 00:22:28.754 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:22:28.754 ===================================================== 00:22:28.754 Controller Capabilities/Features 00:22:28.754 ================================ 00:22:28.754 Vendor ID: 0000 00:22:28.754 Subsystem Vendor ID: 0000 00:22:28.754 Serial Number: ba8909efe9696eb1cc6c 00:22:28.754 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:22:28.754 Firmware Version: 6.7.0-68 00:22:28.754 Recommended Arb Burst: 6 00:22:28.754 IEEE OUI Identifier: 00 00 00 00:22:28.754 Multi-path I/O 00:22:28.754 May have multiple subsystem ports: Yes 00:22:28.754 May have multiple controllers: Yes 00:22:28.754 Associated with SR-IOV VF: No 00:22:28.754 Max Data Transfer Size: 1048576 00:22:28.754 Max Number of Namespaces: 1024 00:22:28.754 Max Number of I/O Queues: 128 00:22:28.754 NVMe Specification Version (VS): 1.3 00:22:28.754 NVMe Specification Version (Identify): 1.3 00:22:28.754 Maximum Queue Entries: 128 00:22:28.754 Contiguous Queues Required: No 00:22:28.754 Arbitration Mechanisms 
Supported 00:22:28.754 Weighted Round Robin: Not Supported 00:22:28.754 Vendor Specific: Not Supported 00:22:28.754 Reset Timeout: 7500 ms 00:22:28.754 Doorbell Stride: 4 bytes 00:22:28.754 NVM Subsystem Reset: Not Supported 00:22:28.754 Command Sets Supported 00:22:28.754 NVM Command Set: Supported 00:22:28.754 Boot Partition: Not Supported 00:22:28.754 Memory Page Size Minimum: 4096 bytes 00:22:28.754 Memory Page Size Maximum: 4096 bytes 00:22:28.754 Persistent Memory Region: Not Supported 00:22:28.754 Optional Asynchronous Events Supported 00:22:28.754 Namespace Attribute Notices: Supported 00:22:28.754 Firmware Activation Notices: Not Supported 00:22:28.754 ANA Change Notices: Supported 00:22:28.755 PLE Aggregate Log Change Notices: Not Supported 00:22:28.755 LBA Status Info Alert Notices: Not Supported 00:22:28.755 EGE Aggregate Log Change Notices: Not Supported 00:22:28.755 Normal NVM Subsystem Shutdown event: Not Supported 00:22:28.755 Zone Descriptor Change Notices: Not Supported 00:22:28.755 Discovery Log Change Notices: Not Supported 00:22:28.755 Controller Attributes 00:22:28.755 128-bit Host Identifier: Supported 00:22:28.755 Non-Operational Permissive Mode: Not Supported 00:22:28.755 NVM Sets: Not Supported 00:22:28.755 Read Recovery Levels: Not Supported 00:22:28.755 Endurance Groups: Not Supported 00:22:28.755 Predictable Latency Mode: Not Supported 00:22:28.755 Traffic Based Keep ALive: Supported 00:22:28.755 Namespace Granularity: Not Supported 00:22:28.755 SQ Associations: Not Supported 00:22:28.755 UUID List: Not Supported 00:22:28.755 Multi-Domain Subsystem: Not Supported 00:22:28.755 Fixed Capacity Management: Not Supported 00:22:28.755 Variable Capacity Management: Not Supported 00:22:28.755 Delete Endurance Group: Not Supported 00:22:28.755 Delete NVM Set: Not Supported 00:22:28.755 Extended LBA Formats Supported: Not Supported 00:22:28.755 Flexible Data Placement Supported: Not Supported 00:22:28.755 00:22:28.755 Controller Memory Buffer Support 00:22:28.755 ================================ 00:22:28.755 Supported: No 00:22:28.755 00:22:28.755 Persistent Memory Region Support 00:22:28.755 ================================ 00:22:28.755 Supported: No 00:22:28.755 00:22:28.755 Admin Command Set Attributes 00:22:28.755 ============================ 00:22:28.755 Security Send/Receive: Not Supported 00:22:28.755 Format NVM: Not Supported 00:22:28.755 Firmware Activate/Download: Not Supported 00:22:28.755 Namespace Management: Not Supported 00:22:28.755 Device Self-Test: Not Supported 00:22:28.755 Directives: Not Supported 00:22:28.755 NVMe-MI: Not Supported 00:22:28.755 Virtualization Management: Not Supported 00:22:28.755 Doorbell Buffer Config: Not Supported 00:22:28.755 Get LBA Status Capability: Not Supported 00:22:28.755 Command & Feature Lockdown Capability: Not Supported 00:22:28.755 Abort Command Limit: 4 00:22:28.755 Async Event Request Limit: 4 00:22:28.755 Number of Firmware Slots: N/A 00:22:28.755 Firmware Slot 1 Read-Only: N/A 00:22:28.755 Firmware Activation Without Reset: N/A 00:22:28.755 Multiple Update Detection Support: N/A 00:22:28.755 Firmware Update Granularity: No Information Provided 00:22:28.755 Per-Namespace SMART Log: Yes 00:22:28.755 Asymmetric Namespace Access Log Page: Supported 00:22:28.755 ANA Transition Time : 10 sec 00:22:28.755 00:22:28.755 Asymmetric Namespace Access Capabilities 00:22:28.755 ANA Optimized State : Supported 00:22:28.755 ANA Non-Optimized State : Supported 00:22:28.755 ANA Inaccessible State : Supported 00:22:28.755 ANA 
Persistent Loss State : Supported 00:22:28.755 ANA Change State : Supported 00:22:28.755 ANAGRPID is not changed : No 00:22:28.755 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:22:28.755 00:22:28.755 ANA Group Identifier Maximum : 128 00:22:28.755 Number of ANA Group Identifiers : 128 00:22:28.755 Max Number of Allowed Namespaces : 1024 00:22:28.755 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:22:28.755 Command Effects Log Page: Supported 00:22:28.755 Get Log Page Extended Data: Supported 00:22:28.755 Telemetry Log Pages: Not Supported 00:22:28.755 Persistent Event Log Pages: Not Supported 00:22:28.755 Supported Log Pages Log Page: May Support 00:22:28.755 Commands Supported & Effects Log Page: Not Supported 00:22:28.755 Feature Identifiers & Effects Log Page:May Support 00:22:28.755 NVMe-MI Commands & Effects Log Page: May Support 00:22:28.755 Data Area 4 for Telemetry Log: Not Supported 00:22:28.755 Error Log Page Entries Supported: 128 00:22:28.755 Keep Alive: Supported 00:22:28.755 Keep Alive Granularity: 1000 ms 00:22:28.755 00:22:28.755 NVM Command Set Attributes 00:22:28.755 ========================== 00:22:28.755 Submission Queue Entry Size 00:22:28.755 Max: 64 00:22:28.755 Min: 64 00:22:28.755 Completion Queue Entry Size 00:22:28.755 Max: 16 00:22:28.755 Min: 16 00:22:28.755 Number of Namespaces: 1024 00:22:28.755 Compare Command: Not Supported 00:22:28.755 Write Uncorrectable Command: Not Supported 00:22:28.755 Dataset Management Command: Supported 00:22:28.755 Write Zeroes Command: Supported 00:22:28.755 Set Features Save Field: Not Supported 00:22:28.755 Reservations: Not Supported 00:22:28.755 Timestamp: Not Supported 00:22:28.755 Copy: Not Supported 00:22:28.755 Volatile Write Cache: Present 00:22:28.755 Atomic Write Unit (Normal): 1 00:22:28.755 Atomic Write Unit (PFail): 1 00:22:28.755 Atomic Compare & Write Unit: 1 00:22:28.755 Fused Compare & Write: Not Supported 00:22:28.755 Scatter-Gather List 00:22:28.755 SGL Command Set: Supported 00:22:28.755 SGL Keyed: Supported 00:22:28.755 SGL Bit Bucket Descriptor: Not Supported 00:22:28.755 SGL Metadata Pointer: Not Supported 00:22:28.755 Oversized SGL: Not Supported 00:22:28.755 SGL Metadata Address: Not Supported 00:22:28.755 SGL Offset: Supported 00:22:28.755 Transport SGL Data Block: Not Supported 00:22:28.755 Replay Protected Memory Block: Not Supported 00:22:28.755 00:22:28.755 Firmware Slot Information 00:22:28.755 ========================= 00:22:28.755 Active slot: 0 00:22:28.755 00:22:28.755 Asymmetric Namespace Access 00:22:28.755 =========================== 00:22:28.755 Change Count : 0 00:22:28.755 Number of ANA Group Descriptors : 1 00:22:28.755 ANA Group Descriptor : 0 00:22:28.755 ANA Group ID : 1 00:22:28.755 Number of NSID Values : 1 00:22:28.755 Change Count : 0 00:22:28.755 ANA State : 1 00:22:28.755 Namespace Identifier : 1 00:22:28.755 00:22:28.755 Commands Supported and Effects 00:22:28.755 ============================== 00:22:28.755 Admin Commands 00:22:28.755 -------------- 00:22:28.755 Get Log Page (02h): Supported 00:22:28.755 Identify (06h): Supported 00:22:28.755 Abort (08h): Supported 00:22:28.755 Set Features (09h): Supported 00:22:28.755 Get Features (0Ah): Supported 00:22:28.755 Asynchronous Event Request (0Ch): Supported 00:22:28.755 Keep Alive (18h): Supported 00:22:28.755 I/O Commands 00:22:28.755 ------------ 00:22:28.755 Flush (00h): Supported 00:22:28.755 Write (01h): Supported LBA-Change 00:22:28.755 Read (02h): Supported 00:22:28.755 Write Zeroes (08h): Supported LBA-Change 
00:22:28.755 Dataset Management (09h): Supported 00:22:28.755 00:22:28.755 Error Log 00:22:28.755 ========= 00:22:28.755 Entry: 0 00:22:28.755 Error Count: 0x3 00:22:28.755 Submission Queue Id: 0x0 00:22:28.755 Command Id: 0x5 00:22:28.755 Phase Bit: 0 00:22:28.755 Status Code: 0x2 00:22:28.755 Status Code Type: 0x0 00:22:28.755 Do Not Retry: 1 00:22:28.755 Error Location: 0x28 00:22:28.755 LBA: 0x0 00:22:28.755 Namespace: 0x0 00:22:28.755 Vendor Log Page: 0x0 00:22:28.755 ----------- 00:22:28.755 Entry: 1 00:22:28.755 Error Count: 0x2 00:22:28.755 Submission Queue Id: 0x0 00:22:28.755 Command Id: 0x5 00:22:28.755 Phase Bit: 0 00:22:28.755 Status Code: 0x2 00:22:28.755 Status Code Type: 0x0 00:22:28.755 Do Not Retry: 1 00:22:28.755 Error Location: 0x28 00:22:28.755 LBA: 0x0 00:22:28.755 Namespace: 0x0 00:22:28.755 Vendor Log Page: 0x0 00:22:28.755 ----------- 00:22:28.755 Entry: 2 00:22:28.755 Error Count: 0x1 00:22:28.755 Submission Queue Id: 0x0 00:22:28.755 Command Id: 0x0 00:22:28.755 Phase Bit: 0 00:22:28.755 Status Code: 0x2 00:22:28.755 Status Code Type: 0x0 00:22:28.755 Do Not Retry: 1 00:22:28.755 Error Location: 0x28 00:22:28.755 LBA: 0x0 00:22:28.755 Namespace: 0x0 00:22:28.755 Vendor Log Page: 0x0 00:22:28.755 00:22:28.755 Number of Queues 00:22:28.755 ================ 00:22:28.755 Number of I/O Submission Queues: 128 00:22:28.755 Number of I/O Completion Queues: 128 00:22:28.755 00:22:28.755 ZNS Specific Controller Data 00:22:28.755 ============================ 00:22:28.755 Zone Append Size Limit: 0 00:22:28.755 00:22:28.755 00:22:28.755 Active Namespaces 00:22:28.755 ================= 00:22:28.755 get_feature(0x05) failed 00:22:28.755 Namespace ID:1 00:22:28.755 Command Set Identifier: NVM (00h) 00:22:28.755 Deallocate: Supported 00:22:28.755 Deallocated/Unwritten Error: Not Supported 00:22:28.755 Deallocated Read Value: Unknown 00:22:28.755 Deallocate in Write Zeroes: Not Supported 00:22:28.755 Deallocated Guard Field: 0xFFFF 00:22:28.755 Flush: Supported 00:22:28.755 Reservation: Not Supported 00:22:28.755 Namespace Sharing Capabilities: Multiple Controllers 00:22:28.755 Size (in LBAs): 732585168 (349GiB) 00:22:28.755 Capacity (in LBAs): 732585168 (349GiB) 00:22:28.755 Utilization (in LBAs): 732585168 (349GiB) 00:22:28.755 UUID: 6df01a78-c0d0-4046-bf29-88dc6dae3b47 00:22:28.756 Thin Provisioning: Not Supported 00:22:28.756 Per-NS Atomic Units: Yes 00:22:28.756 Atomic Boundary Size (Normal): 0 00:22:28.756 Atomic Boundary Size (PFail): 0 00:22:28.756 Atomic Boundary Offset: 0 00:22:28.756 NGUID/EUI64 Never Reused: No 00:22:28.756 ANA group ID: 1 00:22:28.756 Namespace Write Protected: No 00:22:28.756 Number of LBA Formats: 1 00:22:28.756 Current LBA Format: LBA Format #00 00:22:28.756 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:28.756 00:22:28.756 16:33:37 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:22:28.756 16:33:37 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:28.756 16:33:37 -- nvmf/common.sh@117 -- # sync 00:22:28.756 16:33:37 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:28.756 16:33:37 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:28.756 16:33:37 -- nvmf/common.sh@120 -- # set +e 00:22:28.756 16:33:37 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:28.756 16:33:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:28.756 rmmod nvme_rdma 00:22:28.756 rmmod nvme_fabrics 00:22:28.756 16:33:37 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:28.756 16:33:37 -- nvmf/common.sh@124 -- # set -e 00:22:28.756 16:33:37 -- 
nvmf/common.sh@125 -- # return 0 00:22:28.756 16:33:37 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:22:28.756 16:33:37 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:28.756 16:33:37 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:22:28.756 16:33:37 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:22:28.756 16:33:37 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:22:28.756 16:33:37 -- nvmf/common.sh@675 -- # echo 0 00:22:28.756 16:33:37 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:28.756 16:33:37 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:28.756 16:33:37 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:28.756 16:33:37 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:28.756 16:33:37 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:22:28.756 16:33:37 -- nvmf/common.sh@684 -- # modprobe -r nvmet_rdma nvmet 00:22:28.756 16:33:37 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:22:32.045 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:22:32.045 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:22:32.045 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:22:32.045 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:22:32.045 0000:af:00.0 (8086 2701): nvme -> vfio-pci 00:22:32.045 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:22:32.045 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:22:32.045 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:22:32.045 0000:5e:00.0 (144d a80a): nvme -> vfio-pci 00:22:32.045 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:22:32.045 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:22:32.045 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:22:32.045 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:22:32.045 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:22:32.045 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:22:32.045 0000:b0:00.0 (8086 2701): nvme -> vfio-pci 00:22:32.045 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:22:32.045 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:22:32.045 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:22:32.045 00:22:32.045 real 0m14.896s 00:22:32.045 user 0m4.280s 00:22:32.045 sys 0m9.691s 00:22:32.045 16:33:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:32.045 16:33:40 -- common/autotest_common.sh@10 -- # set +x 00:22:32.045 ************************************ 00:22:32.045 END TEST nvmf_identify_kernel_target 00:22:32.045 ************************************ 00:22:32.045 16:33:41 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:22:32.045 16:33:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:32.045 16:33:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:32.045 16:33:41 -- common/autotest_common.sh@10 -- # set +x 00:22:32.305 ************************************ 00:22:32.305 START TEST nvmf_auth 00:22:32.305 ************************************ 00:22:32.305 16:33:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:22:32.305 * Looking for test storage... 
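The clean_kernel_target teardown traced at the top of this block (common.sh@675-@684) mirrors the setup in reverse, which matters because configfs will not let the subsystem directory be removed while a port still links to it. As with the setup, the redirect target of the "echo 0" is hidden by xtrace and is inferred; a sketch of the equivalent manual teardown:

  echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable   # inferred: disable the namespace first
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn                  # unlink the subsystem from the port
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  modprobe -r nvmet_rdma nvmet                                                                   # unload only once configfs is empty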
00:22:32.305 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:32.305 16:33:41 -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:32.305 16:33:41 -- nvmf/common.sh@7 -- # uname -s 00:22:32.305 16:33:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:32.305 16:33:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:32.305 16:33:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:32.305 16:33:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:32.305 16:33:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:32.305 16:33:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:32.305 16:33:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:32.305 16:33:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:32.305 16:33:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:32.305 16:33:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:32.305 16:33:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:22:32.305 16:33:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:22:32.305 16:33:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:32.305 16:33:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:32.305 16:33:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:32.305 16:33:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:32.305 16:33:41 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:32.305 16:33:41 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:32.305 16:33:41 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:32.305 16:33:41 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:32.305 16:33:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.305 16:33:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.305 16:33:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.305 16:33:41 -- paths/export.sh@5 -- # export PATH 00:22:32.305 16:33:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.305 16:33:41 -- nvmf/common.sh@47 -- # : 0 00:22:32.305 16:33:41 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:32.305 16:33:41 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:32.305 16:33:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:32.305 16:33:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:32.305 16:33:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:32.305 16:33:41 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:32.305 16:33:41 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:32.305 16:33:41 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:32.305 16:33:41 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:22:32.305 16:33:41 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:22:32.305 16:33:41 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:22:32.305 16:33:41 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:22:32.305 16:33:41 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:32.305 16:33:41 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:32.305 16:33:41 -- host/auth.sh@21 -- # keys=() 00:22:32.305 16:33:41 -- host/auth.sh@77 -- # nvmftestinit 00:22:32.305 16:33:41 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:22:32.305 16:33:41 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:32.305 16:33:41 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:32.305 16:33:41 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:32.305 16:33:41 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:32.305 16:33:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.305 16:33:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:32.305 16:33:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.305 16:33:41 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:32.305 16:33:41 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:32.305 16:33:41 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:32.305 16:33:41 -- common/autotest_common.sh@10 -- # set +x 00:22:38.872 16:33:47 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:38.872 16:33:47 -- nvmf/common.sh@291 -- # pci_devs=() 
00:22:38.872 16:33:47 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:38.872 16:33:47 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:38.872 16:33:47 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:38.872 16:33:47 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:38.872 16:33:47 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:38.872 16:33:47 -- nvmf/common.sh@295 -- # net_devs=() 00:22:38.872 16:33:47 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:38.872 16:33:47 -- nvmf/common.sh@296 -- # e810=() 00:22:38.872 16:33:47 -- nvmf/common.sh@296 -- # local -ga e810 00:22:38.872 16:33:47 -- nvmf/common.sh@297 -- # x722=() 00:22:38.872 16:33:47 -- nvmf/common.sh@297 -- # local -ga x722 00:22:38.872 16:33:47 -- nvmf/common.sh@298 -- # mlx=() 00:22:38.872 16:33:47 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:38.872 16:33:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:38.872 16:33:47 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:38.872 16:33:47 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:38.872 16:33:47 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:38.872 16:33:47 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:38.872 16:33:47 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:38.872 16:33:47 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:38.872 16:33:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:38.872 16:33:47 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:38.872 16:33:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:38.872 16:33:47 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:38.872 16:33:47 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:38.872 16:33:47 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:38.872 16:33:47 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:38.872 16:33:47 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:38.872 16:33:47 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:38.872 16:33:47 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:38.872 16:33:47 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:38.872 16:33:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:38.872 16:33:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:22:38.872 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:22:38.872 16:33:47 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:38.872 16:33:47 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:38.872 16:33:47 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:22:38.872 16:33:47 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:22:38.872 16:33:47 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:38.872 16:33:47 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:38.872 16:33:47 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:38.872 16:33:47 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:22:38.872 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:22:38.872 16:33:47 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:38.872 16:33:47 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:38.872 16:33:47 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:22:38.872 16:33:47 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:22:38.872 
16:33:47 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:38.872 16:33:47 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:38.872 16:33:47 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:38.872 16:33:47 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:38.872 16:33:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:38.872 16:33:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.872 16:33:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:38.872 16:33:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.872 16:33:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:22:38.872 Found net devices under 0000:18:00.0: mlx_0_0 00:22:38.872 16:33:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.872 16:33:47 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:38.872 16:33:47 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.872 16:33:47 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:38.872 16:33:47 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.872 16:33:47 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:22:38.872 Found net devices under 0000:18:00.1: mlx_0_1 00:22:38.872 16:33:47 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.872 16:33:47 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:38.872 16:33:47 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:38.872 16:33:47 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:38.872 16:33:47 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:22:38.872 16:33:47 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:22:38.872 16:33:47 -- nvmf/common.sh@409 -- # rdma_device_init 00:22:38.872 16:33:47 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:22:38.872 16:33:47 -- nvmf/common.sh@58 -- # uname 00:22:38.872 16:33:47 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:38.872 16:33:47 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:38.872 16:33:47 -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:38.872 16:33:47 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:38.872 16:33:47 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:38.872 16:33:47 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:38.872 16:33:47 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:38.872 16:33:47 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:38.872 16:33:47 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:22:38.872 16:33:47 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:38.872 16:33:47 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:38.872 16:33:47 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:38.872 16:33:47 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:38.872 16:33:47 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:38.872 16:33:47 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:38.872 16:33:47 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:38.872 16:33:47 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:38.872 16:33:47 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:38.872 16:33:47 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:38.872 16:33:47 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:38.872 16:33:47 -- nvmf/common.sh@105 -- # continue 2 00:22:38.872 16:33:47 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:38.872 16:33:47 -- 
nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:38.872 16:33:47 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:38.872 16:33:47 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:38.872 16:33:47 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:38.872 16:33:47 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:38.872 16:33:47 -- nvmf/common.sh@105 -- # continue 2 00:22:38.872 16:33:47 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:38.872 16:33:47 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:38.872 16:33:47 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:38.872 16:33:47 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:38.872 16:33:47 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:38.872 16:33:47 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:38.872 16:33:47 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:38.872 16:33:47 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:38.872 16:33:47 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:38.872 15: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:38.872 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:22:38.872 altname enp24s0f0np0 00:22:38.872 altname ens785f0np0 00:22:38.872 inet 192.168.100.8/24 scope global mlx_0_0 00:22:38.872 valid_lft forever preferred_lft forever 00:22:38.872 16:33:47 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:38.872 16:33:47 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:38.872 16:33:47 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:38.873 16:33:47 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:38.873 16:33:47 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:38.873 16:33:47 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:38.873 16:33:47 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:38.873 16:33:47 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:38.873 16:33:47 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:38.873 16: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:38.873 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:22:38.873 altname enp24s0f1np1 00:22:38.873 altname ens785f1np1 00:22:38.873 inet 192.168.100.9/24 scope global mlx_0_1 00:22:38.873 valid_lft forever preferred_lft forever 00:22:38.873 16:33:47 -- nvmf/common.sh@411 -- # return 0 00:22:38.873 16:33:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:38.873 16:33:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:38.873 16:33:47 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:22:38.873 16:33:47 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:22:38.873 16:33:47 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:38.873 16:33:47 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:38.873 16:33:47 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:38.873 16:33:47 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:38.873 16:33:47 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:38.873 16:33:47 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:38.873 16:33:47 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:38.873 16:33:47 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:38.873 16:33:47 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:38.873 16:33:47 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:38.873 16:33:47 -- nvmf/common.sh@105 -- # continue 2 00:22:38.873 16:33:47 
-- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:38.873 16:33:47 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:38.873 16:33:47 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:38.873 16:33:47 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:38.873 16:33:47 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:38.873 16:33:47 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:38.873 16:33:47 -- nvmf/common.sh@105 -- # continue 2 00:22:38.873 16:33:47 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:38.873 16:33:47 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:38.873 16:33:47 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:38.873 16:33:47 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:38.873 16:33:47 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:38.873 16:33:47 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:38.873 16:33:47 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:38.873 16:33:47 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:38.873 16:33:47 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:38.873 16:33:47 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:38.873 16:33:47 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:38.873 16:33:47 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:38.873 16:33:47 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:22:38.873 192.168.100.9' 00:22:38.873 16:33:47 -- nvmf/common.sh@446 -- # head -n 1 00:22:38.873 16:33:47 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:22:38.873 192.168.100.9' 00:22:38.873 16:33:47 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:38.873 16:33:47 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:22:38.873 192.168.100.9' 00:22:38.873 16:33:47 -- nvmf/common.sh@447 -- # tail -n +2 00:22:38.873 16:33:47 -- nvmf/common.sh@447 -- # head -n 1 00:22:38.873 16:33:47 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:38.873 16:33:47 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:22:38.873 16:33:47 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:38.873 16:33:47 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:22:38.873 16:33:47 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:22:38.873 16:33:47 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:22:38.873 16:33:47 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:22:38.873 16:33:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:38.873 16:33:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:38.873 16:33:47 -- common/autotest_common.sh@10 -- # set +x 00:22:38.873 16:33:47 -- nvmf/common.sh@470 -- # nvmfpid=555560 00:22:38.873 16:33:47 -- nvmf/common.sh@471 -- # waitforlisten 555560 00:22:38.873 16:33:47 -- common/autotest_common.sh@817 -- # '[' -z 555560 ']' 00:22:38.873 16:33:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:38.873 16:33:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:38.873 16:33:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
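The interface and address discovery traced above (get_rdma_if_list / get_ip_address) boils down to walking the mlx_0_* netdevs behind the mlx5 ports and parsing `ip -o -4 addr show`; a minimal equivalent, with the interface names taken from this run:

  for ifc in mlx_0_0 mlx_0_1; do
      # one line per address: "<idx>: <ifname> inet <addr>/<prefix> ..."
      ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
  done
  # prints 192.168.100.8 and 192.168.100.9 on this rig; the first becomes
  # NVMF_FIRST_TARGET_IP and the second NVMF_SECOND_TARGET_IP, exactly as in
  # the RDMA_IP_LIST handling above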
00:22:38.873 16:33:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:38.873 16:33:47 -- common/autotest_common.sh@10 -- # set +x 00:22:38.873 16:33:47 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:22:39.440 16:33:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:39.440 16:33:48 -- common/autotest_common.sh@850 -- # return 0 00:22:39.440 16:33:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:39.440 16:33:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:39.440 16:33:48 -- common/autotest_common.sh@10 -- # set +x 00:22:39.440 16:33:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:39.440 16:33:48 -- host/auth.sh@79 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:22:39.440 16:33:48 -- host/auth.sh@81 -- # gen_key null 32 00:22:39.440 16:33:48 -- host/auth.sh@53 -- # local digest len file key 00:22:39.440 16:33:48 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:39.440 16:33:48 -- host/auth.sh@54 -- # local -A digests 00:22:39.440 16:33:48 -- host/auth.sh@56 -- # digest=null 00:22:39.440 16:33:48 -- host/auth.sh@56 -- # len=32 00:22:39.440 16:33:48 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:39.440 16:33:48 -- host/auth.sh@57 -- # key=55a15cd3e5aee36564f5721f646bb6f1 00:22:39.440 16:33:48 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:22:39.440 16:33:48 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.L6n 00:22:39.440 16:33:48 -- host/auth.sh@59 -- # format_dhchap_key 55a15cd3e5aee36564f5721f646bb6f1 0 00:22:39.440 16:33:48 -- nvmf/common.sh@708 -- # format_key DHHC-1 55a15cd3e5aee36564f5721f646bb6f1 0 00:22:39.440 16:33:48 -- nvmf/common.sh@691 -- # local prefix key digest 00:22:39.440 16:33:48 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:22:39.440 16:33:48 -- nvmf/common.sh@693 -- # key=55a15cd3e5aee36564f5721f646bb6f1 00:22:39.440 16:33:48 -- nvmf/common.sh@693 -- # digest=0 00:22:39.440 16:33:48 -- nvmf/common.sh@694 -- # python - 00:22:39.440 16:33:48 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.L6n 00:22:39.440 16:33:48 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.L6n 00:22:39.440 16:33:48 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.L6n 00:22:39.440 16:33:48 -- host/auth.sh@82 -- # gen_key null 48 00:22:39.440 16:33:48 -- host/auth.sh@53 -- # local digest len file key 00:22:39.440 16:33:48 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:39.440 16:33:48 -- host/auth.sh@54 -- # local -A digests 00:22:39.440 16:33:48 -- host/auth.sh@56 -- # digest=null 00:22:39.440 16:33:48 -- host/auth.sh@56 -- # len=48 00:22:39.440 16:33:48 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:39.698 16:33:48 -- host/auth.sh@57 -- # key=1c89cf631d8e658ae96ca52a1bea8187009d66fa4d4c8104 00:22:39.698 16:33:48 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:22:39.698 16:33:48 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.CtS 00:22:39.698 16:33:48 -- host/auth.sh@59 -- # format_dhchap_key 1c89cf631d8e658ae96ca52a1bea8187009d66fa4d4c8104 0 00:22:39.698 16:33:48 -- nvmf/common.sh@708 -- # format_key DHHC-1 1c89cf631d8e658ae96ca52a1bea8187009d66fa4d4c8104 0 00:22:39.698 16:33:48 -- nvmf/common.sh@691 -- # local prefix key digest 00:22:39.698 16:33:48 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:22:39.698 
16:33:48 -- nvmf/common.sh@693 -- # key=1c89cf631d8e658ae96ca52a1bea8187009d66fa4d4c8104 00:22:39.698 16:33:48 -- nvmf/common.sh@693 -- # digest=0 00:22:39.698 16:33:48 -- nvmf/common.sh@694 -- # python - 00:22:39.698 16:33:48 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.CtS 00:22:39.698 16:33:48 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.CtS 00:22:39.698 16:33:48 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.CtS 00:22:39.698 16:33:48 -- host/auth.sh@83 -- # gen_key sha256 32 00:22:39.698 16:33:48 -- host/auth.sh@53 -- # local digest len file key 00:22:39.699 16:33:48 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:39.699 16:33:48 -- host/auth.sh@54 -- # local -A digests 00:22:39.699 16:33:48 -- host/auth.sh@56 -- # digest=sha256 00:22:39.699 16:33:48 -- host/auth.sh@56 -- # len=32 00:22:39.699 16:33:48 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:39.699 16:33:48 -- host/auth.sh@57 -- # key=34d692f6445f195eedb40406e8de2ca4 00:22:39.699 16:33:48 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:22:39.699 16:33:48 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.Wjf 00:22:39.699 16:33:48 -- host/auth.sh@59 -- # format_dhchap_key 34d692f6445f195eedb40406e8de2ca4 1 00:22:39.699 16:33:48 -- nvmf/common.sh@708 -- # format_key DHHC-1 34d692f6445f195eedb40406e8de2ca4 1 00:22:39.699 16:33:48 -- nvmf/common.sh@691 -- # local prefix key digest 00:22:39.699 16:33:48 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:22:39.699 16:33:48 -- nvmf/common.sh@693 -- # key=34d692f6445f195eedb40406e8de2ca4 00:22:39.699 16:33:48 -- nvmf/common.sh@693 -- # digest=1 00:22:39.699 16:33:48 -- nvmf/common.sh@694 -- # python - 00:22:39.699 16:33:48 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.Wjf 00:22:39.699 16:33:48 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.Wjf 00:22:39.699 16:33:48 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.Wjf 00:22:39.699 16:33:48 -- host/auth.sh@84 -- # gen_key sha384 48 00:22:39.699 16:33:48 -- host/auth.sh@53 -- # local digest len file key 00:22:39.699 16:33:48 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:39.699 16:33:48 -- host/auth.sh@54 -- # local -A digests 00:22:39.699 16:33:48 -- host/auth.sh@56 -- # digest=sha384 00:22:39.699 16:33:48 -- host/auth.sh@56 -- # len=48 00:22:39.699 16:33:48 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:39.699 16:33:48 -- host/auth.sh@57 -- # key=7963f0f62b84f963360b90d89192217f68f2b23222f879ef 00:22:39.699 16:33:48 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:22:39.699 16:33:48 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.syj 00:22:39.699 16:33:48 -- host/auth.sh@59 -- # format_dhchap_key 7963f0f62b84f963360b90d89192217f68f2b23222f879ef 2 00:22:39.699 16:33:48 -- nvmf/common.sh@708 -- # format_key DHHC-1 7963f0f62b84f963360b90d89192217f68f2b23222f879ef 2 00:22:39.699 16:33:48 -- nvmf/common.sh@691 -- # local prefix key digest 00:22:39.699 16:33:48 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:22:39.699 16:33:48 -- nvmf/common.sh@693 -- # key=7963f0f62b84f963360b90d89192217f68f2b23222f879ef 00:22:39.699 16:33:48 -- nvmf/common.sh@693 -- # digest=2 00:22:39.699 16:33:48 -- nvmf/common.sh@694 -- # python - 00:22:39.699 16:33:48 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.syj 00:22:39.699 16:33:48 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.syj 00:22:39.699 16:33:48 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.syj 00:22:39.699 16:33:48 -- 
host/auth.sh@85 -- # gen_key sha512 64 00:22:39.699 16:33:48 -- host/auth.sh@53 -- # local digest len file key 00:22:39.699 16:33:48 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:39.699 16:33:48 -- host/auth.sh@54 -- # local -A digests 00:22:39.699 16:33:48 -- host/auth.sh@56 -- # digest=sha512 00:22:39.699 16:33:48 -- host/auth.sh@56 -- # len=64 00:22:39.699 16:33:48 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:39.699 16:33:48 -- host/auth.sh@57 -- # key=6000298403191b7e4cd558b8139df856e08a9520b93658aa12a9055981a6fddf 00:22:39.699 16:33:48 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:22:39.699 16:33:48 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.0Ot 00:22:39.699 16:33:48 -- host/auth.sh@59 -- # format_dhchap_key 6000298403191b7e4cd558b8139df856e08a9520b93658aa12a9055981a6fddf 3 00:22:39.699 16:33:48 -- nvmf/common.sh@708 -- # format_key DHHC-1 6000298403191b7e4cd558b8139df856e08a9520b93658aa12a9055981a6fddf 3 00:22:39.699 16:33:48 -- nvmf/common.sh@691 -- # local prefix key digest 00:22:39.699 16:33:48 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:22:39.699 16:33:48 -- nvmf/common.sh@693 -- # key=6000298403191b7e4cd558b8139df856e08a9520b93658aa12a9055981a6fddf 00:22:39.699 16:33:48 -- nvmf/common.sh@693 -- # digest=3 00:22:39.699 16:33:48 -- nvmf/common.sh@694 -- # python - 00:22:39.699 16:33:48 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.0Ot 00:22:39.699 16:33:48 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.0Ot 00:22:39.699 16:33:48 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.0Ot 00:22:39.699 16:33:48 -- host/auth.sh@87 -- # waitforlisten 555560 00:22:39.699 16:33:48 -- common/autotest_common.sh@817 -- # '[' -z 555560 ']' 00:22:39.699 16:33:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.699 16:33:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:39.699 16:33:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:39.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
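The five DH-HMAC-CHAP secrets generated above (keys[0] through keys[4]) follow one pattern: xxd draws 16, 24 or 32 random bytes from /dev/urandom as a hex string, and format_dhchap_key hands that hex to an embedded python snippet (nvmf/common.sh@694) whose body xtrace does not capture. The sketch below reproduces what that step has to emit, the NVMe interchange form "DHHC-1:<digest-id>:<base64 of secret plus its CRC-32>:" with digest ids 0-3 for null/sha256/sha384/sha512 as in the digests map above; treat the python line as an assumed reconstruction, not a copy of the helper:

  key_hex=$(xxd -p -c0 -l 16 /dev/urandom)      # "gen_key null 32" -> 32 hex characters
  key_file=$(mktemp -t spdk.key-null.XXX)       # e.g. /tmp/spdk.key-null.L6n in this run
  python3 -c 'import base64,binascii,struct,sys; s=bytes.fromhex(sys.argv[1]); print("DHHC-1:00:%s:" % base64.b64encode(s + struct.pack("<I", binascii.crc32(s))).decode())' "$key_hex" > "$key_file"
  chmod 0600 "$key_file"                        # the path lands in keys[i] and is later handed to
                                                # the target via the keyring_file_add_key RPCs below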
00:22:39.699 16:33:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:39.699 16:33:48 -- common/autotest_common.sh@10 -- # set +x 00:22:39.959 16:33:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:39.959 16:33:48 -- common/autotest_common.sh@850 -- # return 0 00:22:39.959 16:33:48 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:22:39.959 16:33:48 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.L6n 00:22:39.959 16:33:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:39.959 16:33:48 -- common/autotest_common.sh@10 -- # set +x 00:22:39.959 16:33:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:39.959 16:33:48 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:22:39.959 16:33:48 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.CtS 00:22:39.959 16:33:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:39.959 16:33:48 -- common/autotest_common.sh@10 -- # set +x 00:22:39.959 16:33:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:39.959 16:33:48 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:22:39.959 16:33:48 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Wjf 00:22:39.959 16:33:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:39.959 16:33:48 -- common/autotest_common.sh@10 -- # set +x 00:22:39.959 16:33:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:39.959 16:33:48 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:22:39.959 16:33:48 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.syj 00:22:39.959 16:33:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:39.959 16:33:48 -- common/autotest_common.sh@10 -- # set +x 00:22:39.959 16:33:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:39.959 16:33:48 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:22:39.959 16:33:48 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.0Ot 00:22:39.959 16:33:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:39.959 16:33:48 -- common/autotest_common.sh@10 -- # set +x 00:22:39.959 16:33:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:39.959 16:33:48 -- host/auth.sh@92 -- # nvmet_auth_init 00:22:39.959 16:33:48 -- host/auth.sh@35 -- # get_main_ns_ip 00:22:39.959 16:33:48 -- nvmf/common.sh@717 -- # local ip 00:22:39.959 16:33:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:39.959 16:33:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:39.959 16:33:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:39.959 16:33:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:39.959 16:33:48 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:39.959 16:33:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:39.959 16:33:48 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:39.959 16:33:48 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:39.959 16:33:48 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:39.959 16:33:48 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:22:39.959 16:33:48 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:22:39.959 16:33:48 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:22:39.959 16:33:48 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:39.959 16:33:48 -- 
nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:39.959 16:33:48 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:39.959 16:33:48 -- nvmf/common.sh@628 -- # local block nvme 00:22:39.959 16:33:48 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:22:39.959 16:33:48 -- nvmf/common.sh@631 -- # modprobe nvmet 00:22:39.959 16:33:48 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:39.959 16:33:48 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:22:43.244 Waiting for block devices as requested 00:22:43.502 0000:5e:00.0 (144d a80a): vfio-pci -> nvme 00:22:43.502 0000:af:00.0 (8086 2701): vfio-pci -> nvme 00:22:43.760 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:22:43.760 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:22:43.760 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:22:44.019 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:22:44.019 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:22:44.019 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:22:44.019 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:22:44.277 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:22:44.277 0000:b0:00.0 (8086 2701): vfio-pci -> nvme 00:22:44.277 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:22:44.535 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:22:44.535 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:22:44.535 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:22:44.794 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:22:44.794 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:22:44.794 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:22:45.052 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:22:45.987 16:33:54 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:22:45.987 16:33:54 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:45.987 16:33:54 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:22:45.987 16:33:54 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:22:45.987 16:33:54 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:45.987 16:33:54 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:22:45.987 16:33:54 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:22:45.987 16:33:54 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:22:45.987 16:33:54 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:22:45.987 No valid GPT data, bailing 00:22:45.987 16:33:54 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:45.987 16:33:54 -- scripts/common.sh@391 -- # pt= 00:22:45.987 16:33:54 -- scripts/common.sh@392 -- # return 1 00:22:45.987 16:33:54 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:22:45.987 16:33:54 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:22:45.987 16:33:54 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:22:45.987 16:33:54 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:22:45.987 16:33:54 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:22:45.987 16:33:54 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:22:45.987 16:33:54 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:22:45.987 16:33:54 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:22:45.987 16:33:54 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:22:45.988 16:33:54 -- scripts/common.sh@387 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:22:45.988 No valid GPT data, bailing 00:22:45.988 16:33:54 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:22:45.988 16:33:54 -- scripts/common.sh@391 -- # pt= 00:22:45.988 16:33:54 -- scripts/common.sh@392 -- # return 1 00:22:45.988 16:33:54 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:22:45.988 16:33:54 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:22:45.988 16:33:54 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme2n1 ]] 00:22:45.988 16:33:54 -- nvmf/common.sh@641 -- # is_block_zoned nvme2n1 00:22:45.988 16:33:54 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:22:45.988 16:33:54 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:22:45.988 16:33:54 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:22:45.988 16:33:54 -- nvmf/common.sh@642 -- # block_in_use nvme2n1 00:22:45.988 16:33:54 -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:22:45.988 16:33:54 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme2n1 00:22:45.988 No valid GPT data, bailing 00:22:45.988 16:33:54 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:22:45.988 16:33:54 -- scripts/common.sh@391 -- # pt= 00:22:45.988 16:33:54 -- scripts/common.sh@392 -- # return 1 00:22:45.988 16:33:54 -- nvmf/common.sh@642 -- # nvme=/dev/nvme2n1 00:22:45.988 16:33:54 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme2n1 ]] 00:22:45.988 16:33:54 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:45.988 16:33:54 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:45.988 16:33:54 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:45.988 16:33:54 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:22:45.988 16:33:54 -- nvmf/common.sh@656 -- # echo 1 00:22:45.988 16:33:54 -- nvmf/common.sh@657 -- # echo /dev/nvme2n1 00:22:45.988 16:33:54 -- nvmf/common.sh@658 -- # echo 1 00:22:45.988 16:33:54 -- nvmf/common.sh@660 -- # echo 192.168.100.8 00:22:45.988 16:33:54 -- nvmf/common.sh@661 -- # echo rdma 00:22:45.988 16:33:54 -- nvmf/common.sh@662 -- # echo 4420 00:22:45.988 16:33:54 -- nvmf/common.sh@663 -- # echo ipv4 00:22:45.988 16:33:54 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:45.988 16:33:54 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -a 192.168.100.8 -t rdma -s 4420 00:22:45.988 00:22:45.988 Discovery Log Number of Records 2, Generation counter 2 00:22:45.988 =====Discovery Log Entry 0====== 00:22:45.988 trtype: rdma 00:22:45.988 adrfam: ipv4 00:22:45.988 subtype: current discovery subsystem 00:22:45.988 treq: not specified, sq flow control disable supported 00:22:45.988 portid: 1 00:22:45.988 trsvcid: 4420 00:22:45.988 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:45.988 traddr: 192.168.100.8 00:22:45.988 eflags: none 00:22:45.988 rdma_prtype: not specified 00:22:45.988 rdma_qptype: connected 00:22:45.988 rdma_cms: rdma-cm 00:22:45.988 rdma_pkey: 0x0000 00:22:45.988 =====Discovery Log Entry 1====== 00:22:45.988 trtype: rdma 00:22:45.988 adrfam: ipv4 00:22:45.988 subtype: nvme subsystem 00:22:45.988 treq: not specified, sq flow 
control disable supported 00:22:45.988 portid: 1 00:22:45.988 trsvcid: 4420 00:22:45.988 subnqn: nqn.2024-02.io.spdk:cnode0 00:22:45.988 traddr: 192.168.100.8 00:22:45.988 eflags: none 00:22:45.988 rdma_prtype: not specified 00:22:45.988 rdma_qptype: connected 00:22:45.988 rdma_cms: rdma-cm 00:22:45.988 rdma_pkey: 0x0000 00:22:45.988 16:33:54 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:45.988 16:33:55 -- host/auth.sh@37 -- # echo 0 00:22:45.988 16:33:55 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:22:45.988 16:33:55 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:45.988 16:33:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:45.988 16:33:55 -- host/auth.sh@44 -- # digest=sha256 00:22:45.988 16:33:55 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:45.988 16:33:55 -- host/auth.sh@44 -- # keyid=1 00:22:45.988 16:33:55 -- host/auth.sh@45 -- # key=DHHC-1:00:MWM4OWNmNjMxZDhlNjU4YWU5NmNhNTJhMWJlYTgxODcwMDlkNjZmYTRkNGM4MTA0bu+4fw==: 00:22:45.988 16:33:55 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:46.247 16:33:55 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:46.247 16:33:55 -- host/auth.sh@49 -- # echo DHHC-1:00:MWM4OWNmNjMxZDhlNjU4YWU5NmNhNTJhMWJlYTgxODcwMDlkNjZmYTRkNGM4MTA0bu+4fw==: 00:22:46.247 16:33:55 -- host/auth.sh@100 -- # IFS=, 00:22:46.247 16:33:55 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:22:46.247 16:33:55 -- host/auth.sh@100 -- # IFS=, 00:22:46.247 16:33:55 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:46.247 16:33:55 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:22:46.247 16:33:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:46.247 16:33:55 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:22:46.247 16:33:55 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:46.247 16:33:55 -- host/auth.sh@68 -- # keyid=1 00:22:46.247 16:33:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:46.247 16:33:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:46.247 16:33:55 -- common/autotest_common.sh@10 -- # set +x 00:22:46.247 16:33:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:46.247 16:33:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:46.247 16:33:55 -- nvmf/common.sh@717 -- # local ip 00:22:46.247 16:33:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:46.247 16:33:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:46.247 16:33:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:46.247 16:33:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:46.247 16:33:55 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:46.247 16:33:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:46.247 16:33:55 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:46.247 16:33:55 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:46.247 16:33:55 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:46.247 16:33:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key1 00:22:46.247 16:33:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:46.247 16:33:55 -- common/autotest_common.sh@10 -- # set +x 00:22:46.505 nvme0n1 00:22:46.505 16:33:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:46.505 16:33:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:46.505 16:33:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:46.505 16:33:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:46.505 16:33:55 -- common/autotest_common.sh@10 -- # set +x 00:22:46.505 16:33:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:46.505 16:33:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.505 16:33:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:46.505 16:33:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:46.505 16:33:55 -- common/autotest_common.sh@10 -- # set +x 00:22:46.505 16:33:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:46.505 16:33:55 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:22:46.505 16:33:55 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:22:46.505 16:33:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:46.505 16:33:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:22:46.505 16:33:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:46.505 16:33:55 -- host/auth.sh@44 -- # digest=sha256 00:22:46.505 16:33:55 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:46.505 16:33:55 -- host/auth.sh@44 -- # keyid=0 00:22:46.505 16:33:55 -- host/auth.sh@45 -- # key=DHHC-1:00:NTVhMTVjZDNlNWFlZTM2NTY0ZjU3MjFmNjQ2YmI2ZjEV/MxR: 00:22:46.505 16:33:55 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:46.505 16:33:55 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:46.505 16:33:55 -- host/auth.sh@49 -- # echo DHHC-1:00:NTVhMTVjZDNlNWFlZTM2NTY0ZjU3MjFmNjQ2YmI2ZjEV/MxR: 00:22:46.505 16:33:55 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:22:46.505 16:33:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:46.505 16:33:55 -- host/auth.sh@68 -- # digest=sha256 00:22:46.505 16:33:55 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:22:46.505 16:33:55 -- host/auth.sh@68 -- # keyid=0 00:22:46.505 16:33:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:46.505 16:33:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:46.505 16:33:55 -- common/autotest_common.sh@10 -- # set +x 00:22:46.505 16:33:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:46.505 16:33:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:46.505 16:33:55 -- nvmf/common.sh@717 -- # local ip 00:22:46.505 16:33:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:46.505 16:33:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:46.505 16:33:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:46.505 16:33:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:46.505 16:33:55 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:46.505 16:33:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:46.505 16:33:55 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:46.505 16:33:55 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:46.505 16:33:55 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:46.505 16:33:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:22:46.505 16:33:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:46.505 16:33:55 -- common/autotest_common.sh@10 -- # set +x 00:22:46.505 nvme0n1 00:22:46.505 16:33:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:46.505 16:33:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:46.505 16:33:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:46.505 16:33:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:46.505 16:33:55 -- common/autotest_common.sh@10 -- # set +x 00:22:46.505 16:33:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:46.764 16:33:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.764 16:33:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:46.764 16:33:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:46.764 16:33:55 -- common/autotest_common.sh@10 -- # set +x 00:22:46.764 16:33:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:46.764 16:33:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:46.764 16:33:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:46.764 16:33:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:46.764 16:33:55 -- host/auth.sh@44 -- # digest=sha256 00:22:46.764 16:33:55 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:46.764 16:33:55 -- host/auth.sh@44 -- # keyid=1 00:22:46.764 16:33:55 -- host/auth.sh@45 -- # key=DHHC-1:00:MWM4OWNmNjMxZDhlNjU4YWU5NmNhNTJhMWJlYTgxODcwMDlkNjZmYTRkNGM4MTA0bu+4fw==: 00:22:46.764 16:33:55 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:46.764 16:33:55 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:46.764 16:33:55 -- host/auth.sh@49 -- # echo DHHC-1:00:MWM4OWNmNjMxZDhlNjU4YWU5NmNhNTJhMWJlYTgxODcwMDlkNjZmYTRkNGM4MTA0bu+4fw==: 00:22:46.764 16:33:55 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:22:46.764 16:33:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:46.764 16:33:55 -- host/auth.sh@68 -- # digest=sha256 00:22:46.764 16:33:55 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:22:46.764 16:33:55 -- host/auth.sh@68 -- # keyid=1 00:22:46.764 16:33:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:46.764 16:33:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:46.764 16:33:55 -- common/autotest_common.sh@10 -- # set +x 00:22:46.764 16:33:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:46.764 16:33:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:46.764 16:33:55 -- nvmf/common.sh@717 -- # local ip 00:22:46.764 16:33:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:46.764 16:33:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:46.764 16:33:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:46.764 16:33:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:46.764 16:33:55 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:46.764 16:33:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:46.764 16:33:55 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:46.764 16:33:55 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:46.764 16:33:55 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:46.764 16:33:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:22:46.764 16:33:55 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:22:46.764 16:33:55 -- common/autotest_common.sh@10 -- # set +x 00:22:46.764 nvme0n1 00:22:46.764 16:33:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:46.764 16:33:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:46.764 16:33:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:46.764 16:33:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:46.764 16:33:55 -- common/autotest_common.sh@10 -- # set +x 00:22:46.764 16:33:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:46.764 16:33:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.764 16:33:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:46.764 16:33:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:46.764 16:33:55 -- common/autotest_common.sh@10 -- # set +x 00:22:47.023 16:33:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:47.023 16:33:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:47.023 16:33:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:22:47.023 16:33:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:47.023 16:33:55 -- host/auth.sh@44 -- # digest=sha256 00:22:47.023 16:33:55 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:47.023 16:33:55 -- host/auth.sh@44 -- # keyid=2 00:22:47.023 16:33:55 -- host/auth.sh@45 -- # key=DHHC-1:01:MzRkNjkyZjY0NDVmMTk1ZWVkYjQwNDA2ZThkZTJjYTQw9r9h: 00:22:47.023 16:33:55 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:47.023 16:33:55 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:47.023 16:33:55 -- host/auth.sh@49 -- # echo DHHC-1:01:MzRkNjkyZjY0NDVmMTk1ZWVkYjQwNDA2ZThkZTJjYTQw9r9h: 00:22:47.023 16:33:55 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:22:47.023 16:33:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:47.023 16:33:55 -- host/auth.sh@68 -- # digest=sha256 00:22:47.023 16:33:55 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:22:47.023 16:33:55 -- host/auth.sh@68 -- # keyid=2 00:22:47.023 16:33:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:47.023 16:33:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:47.023 16:33:55 -- common/autotest_common.sh@10 -- # set +x 00:22:47.023 16:33:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:47.023 16:33:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:47.023 16:33:55 -- nvmf/common.sh@717 -- # local ip 00:22:47.023 16:33:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:47.023 16:33:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:47.023 16:33:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:47.023 16:33:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:47.023 16:33:55 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:47.023 16:33:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:47.023 16:33:55 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:47.023 16:33:55 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:47.023 16:33:55 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:47.023 16:33:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:47.023 16:33:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:47.023 16:33:55 -- common/autotest_common.sh@10 -- # set +x 00:22:47.023 nvme0n1 
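Each connect_authenticate pass traced above boils down to two RPCs plus a verify/detach cycle against the target app listening on /var/tmp/spdk.sock. A hand-run equivalent of the sha256 / ffdhe2048 / key1 iteration is sketched below: rpc_cmd in this suite wraps SPDK's scripts/rpc.py, key1 is the keyring entry registered with keyring_file_add_key earlier, and the address and NQNs are the ones printed in the log. Treat it as a sketch rather than the test's literal code.

  scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'    # expect nvme0 to show up
  scripts/rpc.py bdev_nvme_detach_controller nvme0               # clean up before the next digest/dhgroup/key combination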
00:22:47.023 16:33:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:47.023 16:33:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:47.023 16:33:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:47.023 16:33:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:47.023 16:33:55 -- common/autotest_common.sh@10 -- # set +x 00:22:47.023 16:33:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:47.023 16:33:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.023 16:33:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:47.023 16:33:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:47.023 16:33:56 -- common/autotest_common.sh@10 -- # set +x 00:22:47.023 16:33:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:47.023 16:33:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:47.023 16:33:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:22:47.023 16:33:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:47.023 16:33:56 -- host/auth.sh@44 -- # digest=sha256 00:22:47.023 16:33:56 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:47.023 16:33:56 -- host/auth.sh@44 -- # keyid=3 00:22:47.023 16:33:56 -- host/auth.sh@45 -- # key=DHHC-1:02:Nzk2M2YwZjYyYjg0Zjk2MzM2MGI5MGQ4OTE5MjIxN2Y2OGYyYjIzMjIyZjg3OWVmxPGfTg==: 00:22:47.023 16:33:56 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:47.023 16:33:56 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:47.023 16:33:56 -- host/auth.sh@49 -- # echo DHHC-1:02:Nzk2M2YwZjYyYjg0Zjk2MzM2MGI5MGQ4OTE5MjIxN2Y2OGYyYjIzMjIyZjg3OWVmxPGfTg==: 00:22:47.283 16:33:56 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:22:47.283 16:33:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:47.283 16:33:56 -- host/auth.sh@68 -- # digest=sha256 00:22:47.283 16:33:56 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:22:47.283 16:33:56 -- host/auth.sh@68 -- # keyid=3 00:22:47.283 16:33:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:47.283 16:33:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:47.283 16:33:56 -- common/autotest_common.sh@10 -- # set +x 00:22:47.283 16:33:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:47.283 16:33:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:47.283 16:33:56 -- nvmf/common.sh@717 -- # local ip 00:22:47.283 16:33:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:47.283 16:33:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:47.283 16:33:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:47.283 16:33:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:47.283 16:33:56 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:47.283 16:33:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:47.283 16:33:56 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:47.283 16:33:56 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:47.283 16:33:56 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:47.283 16:33:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:22:47.283 16:33:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:47.283 16:33:56 -- common/autotest_common.sh@10 -- # set +x 00:22:47.283 nvme0n1 00:22:47.283 16:33:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:47.283 16:33:56 -- 
host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:47.283 16:33:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:47.283 16:33:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:47.283 16:33:56 -- common/autotest_common.sh@10 -- # set +x 00:22:47.283 16:33:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:47.283 16:33:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.283 16:33:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:47.283 16:33:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:47.283 16:33:56 -- common/autotest_common.sh@10 -- # set +x 00:22:47.283 16:33:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:47.283 16:33:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:47.283 16:33:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:22:47.283 16:33:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:47.283 16:33:56 -- host/auth.sh@44 -- # digest=sha256 00:22:47.283 16:33:56 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:47.283 16:33:56 -- host/auth.sh@44 -- # keyid=4 00:22:47.283 16:33:56 -- host/auth.sh@45 -- # key=DHHC-1:03:NjAwMDI5ODQwMzE5MWI3ZTRjZDU1OGI4MTM5ZGY4NTZlMDhhOTUyMGI5MzY1OGFhMTJhOTA1NTk4MWE2ZmRkZgqk8k0=: 00:22:47.283 16:33:56 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:47.283 16:33:56 -- host/auth.sh@48 -- # echo ffdhe2048 00:22:47.283 16:33:56 -- host/auth.sh@49 -- # echo DHHC-1:03:NjAwMDI5ODQwMzE5MWI3ZTRjZDU1OGI4MTM5ZGY4NTZlMDhhOTUyMGI5MzY1OGFhMTJhOTA1NTk4MWE2ZmRkZgqk8k0=: 00:22:47.283 16:33:56 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:22:47.283 16:33:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:47.283 16:33:56 -- host/auth.sh@68 -- # digest=sha256 00:22:47.283 16:33:56 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:22:47.283 16:33:56 -- host/auth.sh@68 -- # keyid=4 00:22:47.283 16:33:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:47.283 16:33:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:47.283 16:33:56 -- common/autotest_common.sh@10 -- # set +x 00:22:47.283 16:33:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:47.283 16:33:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:47.283 16:33:56 -- nvmf/common.sh@717 -- # local ip 00:22:47.283 16:33:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:47.283 16:33:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:47.283 16:33:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:47.283 16:33:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:47.283 16:33:56 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:47.283 16:33:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:47.283 16:33:56 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:47.283 16:33:56 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:47.283 16:33:56 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:47.283 16:33:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:47.283 16:33:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:47.283 16:33:56 -- common/autotest_common.sh@10 -- # set +x 00:22:47.542 nvme0n1 00:22:47.542 16:33:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:47.542 16:33:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 
00:22:47.542 16:33:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:47.542 16:33:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:47.542 16:33:56 -- common/autotest_common.sh@10 -- # set +x 00:22:47.542 16:33:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:47.542 16:33:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.542 16:33:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:47.542 16:33:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:47.542 16:33:56 -- common/autotest_common.sh@10 -- # set +x 00:22:47.542 16:33:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:47.542 16:33:56 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:22:47.542 16:33:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:47.542 16:33:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:22:47.542 16:33:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:47.542 16:33:56 -- host/auth.sh@44 -- # digest=sha256 00:22:47.542 16:33:56 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:47.542 16:33:56 -- host/auth.sh@44 -- # keyid=0 00:22:47.542 16:33:56 -- host/auth.sh@45 -- # key=DHHC-1:00:NTVhMTVjZDNlNWFlZTM2NTY0ZjU3MjFmNjQ2YmI2ZjEV/MxR: 00:22:47.542 16:33:56 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:47.542 16:33:56 -- host/auth.sh@48 -- # echo ffdhe3072 00:22:47.801 16:33:56 -- host/auth.sh@49 -- # echo DHHC-1:00:NTVhMTVjZDNlNWFlZTM2NTY0ZjU3MjFmNjQ2YmI2ZjEV/MxR: 00:22:47.801 16:33:56 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:22:47.801 16:33:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:47.801 16:33:56 -- host/auth.sh@68 -- # digest=sha256 00:22:47.801 16:33:56 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:22:47.801 16:33:56 -- host/auth.sh@68 -- # keyid=0 00:22:47.801 16:33:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:47.801 16:33:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:47.801 16:33:56 -- common/autotest_common.sh@10 -- # set +x 00:22:47.801 16:33:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:47.801 16:33:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:47.801 16:33:56 -- nvmf/common.sh@717 -- # local ip 00:22:47.801 16:33:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:47.801 16:33:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:47.801 16:33:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:47.801 16:33:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:47.801 16:33:56 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:47.801 16:33:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:47.801 16:33:56 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:47.801 16:33:56 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:47.801 16:33:56 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:47.801 16:33:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:22:47.801 16:33:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:47.801 16:33:56 -- common/autotest_common.sh@10 -- # set +x 00:22:48.061 nvme0n1 00:22:48.061 16:33:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:48.061 16:33:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:48.061 16:33:56 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:22:48.061 16:33:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:48.061 16:33:56 -- common/autotest_common.sh@10 -- # set +x 00:22:48.061 16:33:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:48.061 16:33:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.061 16:33:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:48.061 16:33:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:48.061 16:33:56 -- common/autotest_common.sh@10 -- # set +x 00:22:48.061 16:33:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:48.061 16:33:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:48.061 16:33:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:22:48.061 16:33:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:48.061 16:33:56 -- host/auth.sh@44 -- # digest=sha256 00:22:48.061 16:33:56 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:48.061 16:33:56 -- host/auth.sh@44 -- # keyid=1 00:22:48.061 16:33:56 -- host/auth.sh@45 -- # key=DHHC-1:00:MWM4OWNmNjMxZDhlNjU4YWU5NmNhNTJhMWJlYTgxODcwMDlkNjZmYTRkNGM4MTA0bu+4fw==: 00:22:48.061 16:33:56 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:48.061 16:33:56 -- host/auth.sh@48 -- # echo ffdhe3072 00:22:48.061 16:33:56 -- host/auth.sh@49 -- # echo DHHC-1:00:MWM4OWNmNjMxZDhlNjU4YWU5NmNhNTJhMWJlYTgxODcwMDlkNjZmYTRkNGM4MTA0bu+4fw==: 00:22:48.061 16:33:56 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:22:48.061 16:33:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:48.061 16:33:56 -- host/auth.sh@68 -- # digest=sha256 00:22:48.061 16:33:56 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:22:48.061 16:33:56 -- host/auth.sh@68 -- # keyid=1 00:22:48.061 16:33:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:48.061 16:33:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:48.061 16:33:56 -- common/autotest_common.sh@10 -- # set +x 00:22:48.061 16:33:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:48.061 16:33:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:48.061 16:33:56 -- nvmf/common.sh@717 -- # local ip 00:22:48.061 16:33:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:48.061 16:33:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:48.061 16:33:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:48.061 16:33:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:48.061 16:33:56 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:48.061 16:33:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:48.061 16:33:56 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:48.061 16:33:56 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:48.061 16:33:56 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:48.061 16:33:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:22:48.061 16:33:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:48.061 16:33:56 -- common/autotest_common.sh@10 -- # set +x 00:22:48.321 nvme0n1 00:22:48.321 16:33:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:48.321 16:33:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:48.321 16:33:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:48.321 16:33:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:48.321 
16:33:57 -- common/autotest_common.sh@10 -- # set +x 00:22:48.321 16:33:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:48.321 16:33:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.321 16:33:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:48.321 16:33:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:48.321 16:33:57 -- common/autotest_common.sh@10 -- # set +x 00:22:48.321 16:33:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:48.321 16:33:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:48.321 16:33:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:22:48.321 16:33:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:48.321 16:33:57 -- host/auth.sh@44 -- # digest=sha256 00:22:48.321 16:33:57 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:48.321 16:33:57 -- host/auth.sh@44 -- # keyid=2 00:22:48.321 16:33:57 -- host/auth.sh@45 -- # key=DHHC-1:01:MzRkNjkyZjY0NDVmMTk1ZWVkYjQwNDA2ZThkZTJjYTQw9r9h: 00:22:48.321 16:33:57 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:48.321 16:33:57 -- host/auth.sh@48 -- # echo ffdhe3072 00:22:48.321 16:33:57 -- host/auth.sh@49 -- # echo DHHC-1:01:MzRkNjkyZjY0NDVmMTk1ZWVkYjQwNDA2ZThkZTJjYTQw9r9h: 00:22:48.321 16:33:57 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2 00:22:48.321 16:33:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:48.321 16:33:57 -- host/auth.sh@68 -- # digest=sha256 00:22:48.321 16:33:57 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:22:48.321 16:33:57 -- host/auth.sh@68 -- # keyid=2 00:22:48.321 16:33:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:48.321 16:33:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:48.321 16:33:57 -- common/autotest_common.sh@10 -- # set +x 00:22:48.321 16:33:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:48.321 16:33:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:48.321 16:33:57 -- nvmf/common.sh@717 -- # local ip 00:22:48.321 16:33:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:48.321 16:33:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:48.321 16:33:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:48.321 16:33:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:48.321 16:33:57 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:48.321 16:33:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:48.321 16:33:57 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:48.321 16:33:57 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:48.321 16:33:57 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:48.321 16:33:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:48.321 16:33:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:48.321 16:33:57 -- common/autotest_common.sh@10 -- # set +x 00:22:48.580 nvme0n1 00:22:48.580 16:33:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:48.580 16:33:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:48.580 16:33:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:48.580 16:33:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:48.580 16:33:57 -- common/autotest_common.sh@10 -- # set +x 00:22:48.580 16:33:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:48.580 
16:33:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.580 16:33:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:48.580 16:33:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:48.580 16:33:57 -- common/autotest_common.sh@10 -- # set +x 00:22:48.580 16:33:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:48.580 16:33:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:48.580 16:33:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:22:48.580 16:33:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:48.580 16:33:57 -- host/auth.sh@44 -- # digest=sha256 00:22:48.580 16:33:57 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:48.580 16:33:57 -- host/auth.sh@44 -- # keyid=3 00:22:48.580 16:33:57 -- host/auth.sh@45 -- # key=DHHC-1:02:Nzk2M2YwZjYyYjg0Zjk2MzM2MGI5MGQ4OTE5MjIxN2Y2OGYyYjIzMjIyZjg3OWVmxPGfTg==: 00:22:48.580 16:33:57 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:48.580 16:33:57 -- host/auth.sh@48 -- # echo ffdhe3072 00:22:48.580 16:33:57 -- host/auth.sh@49 -- # echo DHHC-1:02:Nzk2M2YwZjYyYjg0Zjk2MzM2MGI5MGQ4OTE5MjIxN2Y2OGYyYjIzMjIyZjg3OWVmxPGfTg==: 00:22:48.580 16:33:57 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3 00:22:48.580 16:33:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:48.580 16:33:57 -- host/auth.sh@68 -- # digest=sha256 00:22:48.580 16:33:57 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:22:48.580 16:33:57 -- host/auth.sh@68 -- # keyid=3 00:22:48.580 16:33:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:48.580 16:33:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:48.580 16:33:57 -- common/autotest_common.sh@10 -- # set +x 00:22:48.580 16:33:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:48.580 16:33:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:48.580 16:33:57 -- nvmf/common.sh@717 -- # local ip 00:22:48.580 16:33:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:48.580 16:33:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:48.580 16:33:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:48.580 16:33:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:48.580 16:33:57 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:48.580 16:33:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:48.580 16:33:57 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:48.580 16:33:57 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:48.580 16:33:57 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:48.580 16:33:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:22:48.580 16:33:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:48.580 16:33:57 -- common/autotest_common.sh@10 -- # set +x 00:22:48.839 nvme0n1 00:22:48.839 16:33:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:48.839 16:33:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:48.839 16:33:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:48.839 16:33:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:48.839 16:33:57 -- common/autotest_common.sh@10 -- # set +x 00:22:48.839 16:33:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:48.839 16:33:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.839 16:33:57 -- 
host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:48.839 16:33:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:48.839 16:33:57 -- common/autotest_common.sh@10 -- # set +x 00:22:48.839 16:33:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:48.839 16:33:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:48.839 16:33:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:22:48.839 16:33:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:48.839 16:33:57 -- host/auth.sh@44 -- # digest=sha256 00:22:48.839 16:33:57 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:48.839 16:33:57 -- host/auth.sh@44 -- # keyid=4 00:22:48.839 16:33:57 -- host/auth.sh@45 -- # key=DHHC-1:03:NjAwMDI5ODQwMzE5MWI3ZTRjZDU1OGI4MTM5ZGY4NTZlMDhhOTUyMGI5MzY1OGFhMTJhOTA1NTk4MWE2ZmRkZgqk8k0=: 00:22:48.839 16:33:57 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:48.839 16:33:57 -- host/auth.sh@48 -- # echo ffdhe3072 00:22:48.839 16:33:57 -- host/auth.sh@49 -- # echo DHHC-1:03:NjAwMDI5ODQwMzE5MWI3ZTRjZDU1OGI4MTM5ZGY4NTZlMDhhOTUyMGI5MzY1OGFhMTJhOTA1NTk4MWE2ZmRkZgqk8k0=: 00:22:48.839 16:33:57 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4 00:22:48.839 16:33:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:48.839 16:33:57 -- host/auth.sh@68 -- # digest=sha256 00:22:48.839 16:33:57 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:22:48.839 16:33:57 -- host/auth.sh@68 -- # keyid=4 00:22:48.839 16:33:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:48.839 16:33:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:48.839 16:33:57 -- common/autotest_common.sh@10 -- # set +x 00:22:48.839 16:33:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:48.839 16:33:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:48.839 16:33:57 -- nvmf/common.sh@717 -- # local ip 00:22:48.839 16:33:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:48.839 16:33:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:48.839 16:33:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:48.839 16:33:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:48.839 16:33:57 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:48.839 16:33:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:48.839 16:33:57 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:48.839 16:33:57 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:48.839 16:33:57 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:48.839 16:33:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:48.839 16:33:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:48.839 16:33:57 -- common/autotest_common.sh@10 -- # set +x 00:22:49.098 nvme0n1 00:22:49.098 16:33:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:49.098 16:33:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:49.098 16:33:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:49.098 16:33:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:49.098 16:33:57 -- common/autotest_common.sh@10 -- # set +x 00:22:49.098 16:33:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:49.098 16:33:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.098 16:33:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:22:49.098 16:33:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:49.098 16:33:57 -- common/autotest_common.sh@10 -- # set +x 00:22:49.098 16:33:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:49.098 16:33:57 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:22:49.098 16:33:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:49.098 16:33:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:22:49.098 16:33:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:49.098 16:33:57 -- host/auth.sh@44 -- # digest=sha256 00:22:49.098 16:33:57 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:49.098 16:33:57 -- host/auth.sh@44 -- # keyid=0 00:22:49.098 16:33:57 -- host/auth.sh@45 -- # key=DHHC-1:00:NTVhMTVjZDNlNWFlZTM2NTY0ZjU3MjFmNjQ2YmI2ZjEV/MxR: 00:22:49.098 16:33:57 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:49.098 16:33:57 -- host/auth.sh@48 -- # echo ffdhe4096 00:22:49.358 16:33:58 -- host/auth.sh@49 -- # echo DHHC-1:00:NTVhMTVjZDNlNWFlZTM2NTY0ZjU3MjFmNjQ2YmI2ZjEV/MxR: 00:22:49.358 16:33:58 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0 00:22:49.358 16:33:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:49.358 16:33:58 -- host/auth.sh@68 -- # digest=sha256 00:22:49.358 16:33:58 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:22:49.358 16:33:58 -- host/auth.sh@68 -- # keyid=0 00:22:49.358 16:33:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:49.358 16:33:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:49.358 16:33:58 -- common/autotest_common.sh@10 -- # set +x 00:22:49.358 16:33:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:49.358 16:33:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:49.358 16:33:58 -- nvmf/common.sh@717 -- # local ip 00:22:49.358 16:33:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:49.358 16:33:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:49.358 16:33:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:49.358 16:33:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:49.358 16:33:58 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:49.358 16:33:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:49.358 16:33:58 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:49.358 16:33:58 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:49.358 16:33:58 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:49.358 16:33:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:22:49.358 16:33:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:49.358 16:33:58 -- common/autotest_common.sh@10 -- # set +x 00:22:49.617 nvme0n1 00:22:49.617 16:33:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:49.617 16:33:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:49.617 16:33:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:49.617 16:33:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:49.617 16:33:58 -- common/autotest_common.sh@10 -- # set +x 00:22:49.617 16:33:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:49.617 16:33:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.617 16:33:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:49.617 16:33:58 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:22:49.617 16:33:58 -- common/autotest_common.sh@10 -- # set +x 00:22:49.876 16:33:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:49.876 16:33:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:49.876 16:33:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:22:49.876 16:33:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:49.876 16:33:58 -- host/auth.sh@44 -- # digest=sha256 00:22:49.876 16:33:58 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:49.876 16:33:58 -- host/auth.sh@44 -- # keyid=1 00:22:49.876 16:33:58 -- host/auth.sh@45 -- # key=DHHC-1:00:MWM4OWNmNjMxZDhlNjU4YWU5NmNhNTJhMWJlYTgxODcwMDlkNjZmYTRkNGM4MTA0bu+4fw==: 00:22:49.876 16:33:58 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:49.876 16:33:58 -- host/auth.sh@48 -- # echo ffdhe4096 00:22:49.876 16:33:58 -- host/auth.sh@49 -- # echo DHHC-1:00:MWM4OWNmNjMxZDhlNjU4YWU5NmNhNTJhMWJlYTgxODcwMDlkNjZmYTRkNGM4MTA0bu+4fw==: 00:22:49.876 16:33:58 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1 00:22:49.876 16:33:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:49.876 16:33:58 -- host/auth.sh@68 -- # digest=sha256 00:22:49.876 16:33:58 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:22:49.876 16:33:58 -- host/auth.sh@68 -- # keyid=1 00:22:49.876 16:33:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:49.876 16:33:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:49.876 16:33:58 -- common/autotest_common.sh@10 -- # set +x 00:22:49.876 16:33:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:49.876 16:33:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:49.876 16:33:58 -- nvmf/common.sh@717 -- # local ip 00:22:49.876 16:33:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:49.876 16:33:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:49.876 16:33:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:49.876 16:33:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:49.876 16:33:58 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:49.876 16:33:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:49.876 16:33:58 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:49.876 16:33:58 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:49.876 16:33:58 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:49.876 16:33:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:22:49.876 16:33:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:49.876 16:33:58 -- common/autotest_common.sh@10 -- # set +x 00:22:49.876 nvme0n1 00:22:49.876 16:33:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:49.876 16:33:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:49.876 16:33:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:49.876 16:33:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:49.876 16:33:58 -- common/autotest_common.sh@10 -- # set +x 00:22:49.876 16:33:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:50.136 16:33:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.136 16:33:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:50.136 16:33:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:50.136 16:33:58 -- common/autotest_common.sh@10 -- # set +x 00:22:50.136 
16:33:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:50.136 16:33:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:50.136 16:33:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:22:50.136 16:33:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:50.136 16:33:58 -- host/auth.sh@44 -- # digest=sha256 00:22:50.136 16:33:58 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:50.136 16:33:58 -- host/auth.sh@44 -- # keyid=2 00:22:50.136 16:33:58 -- host/auth.sh@45 -- # key=DHHC-1:01:MzRkNjkyZjY0NDVmMTk1ZWVkYjQwNDA2ZThkZTJjYTQw9r9h: 00:22:50.136 16:33:58 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:50.136 16:33:58 -- host/auth.sh@48 -- # echo ffdhe4096 00:22:50.136 16:33:58 -- host/auth.sh@49 -- # echo DHHC-1:01:MzRkNjkyZjY0NDVmMTk1ZWVkYjQwNDA2ZThkZTJjYTQw9r9h: 00:22:50.136 16:33:58 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2 00:22:50.136 16:33:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:50.136 16:33:58 -- host/auth.sh@68 -- # digest=sha256 00:22:50.136 16:33:58 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:22:50.136 16:33:58 -- host/auth.sh@68 -- # keyid=2 00:22:50.136 16:33:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:50.136 16:33:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:50.136 16:33:58 -- common/autotest_common.sh@10 -- # set +x 00:22:50.136 16:33:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:50.136 16:33:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:50.136 16:33:58 -- nvmf/common.sh@717 -- # local ip 00:22:50.136 16:33:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:50.136 16:33:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:50.136 16:33:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:50.136 16:33:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:50.136 16:33:58 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:50.136 16:33:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:50.136 16:33:58 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:50.136 16:33:58 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:50.136 16:33:58 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:50.136 16:33:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:50.136 16:33:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:50.136 16:33:58 -- common/autotest_common.sh@10 -- # set +x 00:22:50.396 nvme0n1 00:22:50.396 16:33:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:50.396 16:33:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:50.396 16:33:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:50.396 16:33:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:50.396 16:33:59 -- common/autotest_common.sh@10 -- # set +x 00:22:50.396 16:33:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:50.396 16:33:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.396 16:33:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:50.396 16:33:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:50.396 16:33:59 -- common/autotest_common.sh@10 -- # set +x 00:22:50.396 16:33:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:50.396 16:33:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 
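On the kernel-target side, each nvmet_auth_set_key call above pushes the matching DHHC-1 secret, HMAC transform, and DH group into the host entry that nvmet_auth_init created and linked under the subsystem's allowed_hosts, so the target expects exactly the credentials the initiator offers. A rough configfs equivalent of the nvmet_auth_set_key sha256 ffdhe4096 2 call is sketched below; the values come from the trace, but the dhchap_hash/dhchap_dhgroup/dhchap_key attribute names are assumed from the mainline nvmet configfs layout, since the echoes at host/auth.sh@47-49 only show what is written, not where.

  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host/dhchap_hash"
  echo 'ffdhe4096'    > "$host/dhchap_dhgroup"
  echo 'DHHC-1:01:MzRkNjkyZjY0NDVmMTk1ZWVkYjQwNDA2ZThkZTJjYTQw9r9h:' > "$host/dhchap_key"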
00:22:50.396 16:33:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:22:50.396 16:33:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:50.396 16:33:59 -- host/auth.sh@44 -- # digest=sha256 00:22:50.396 16:33:59 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:50.396 16:33:59 -- host/auth.sh@44 -- # keyid=3 00:22:50.396 16:33:59 -- host/auth.sh@45 -- # key=DHHC-1:02:Nzk2M2YwZjYyYjg0Zjk2MzM2MGI5MGQ4OTE5MjIxN2Y2OGYyYjIzMjIyZjg3OWVmxPGfTg==: 00:22:50.396 16:33:59 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:50.396 16:33:59 -- host/auth.sh@48 -- # echo ffdhe4096 00:22:50.396 16:33:59 -- host/auth.sh@49 -- # echo DHHC-1:02:Nzk2M2YwZjYyYjg0Zjk2MzM2MGI5MGQ4OTE5MjIxN2Y2OGYyYjIzMjIyZjg3OWVmxPGfTg==: 00:22:50.396 16:33:59 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3 00:22:50.396 16:33:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:50.396 16:33:59 -- host/auth.sh@68 -- # digest=sha256 00:22:50.396 16:33:59 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:22:50.396 16:33:59 -- host/auth.sh@68 -- # keyid=3 00:22:50.396 16:33:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:50.396 16:33:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:50.396 16:33:59 -- common/autotest_common.sh@10 -- # set +x 00:22:50.396 16:33:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:50.396 16:33:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:50.396 16:33:59 -- nvmf/common.sh@717 -- # local ip 00:22:50.396 16:33:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:50.396 16:33:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:50.396 16:33:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:50.396 16:33:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:50.396 16:33:59 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:50.396 16:33:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:50.396 16:33:59 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:50.396 16:33:59 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:50.396 16:33:59 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:50.396 16:33:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:22:50.396 16:33:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:50.396 16:33:59 -- common/autotest_common.sh@10 -- # set +x 00:22:50.656 nvme0n1 00:22:50.656 16:33:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:50.656 16:33:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:50.656 16:33:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:50.656 16:33:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:50.656 16:33:59 -- common/autotest_common.sh@10 -- # set +x 00:22:50.656 16:33:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:50.656 16:33:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.656 16:33:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:50.656 16:33:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:50.656 16:33:59 -- common/autotest_common.sh@10 -- # set +x 00:22:50.656 16:33:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:50.656 16:33:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:50.656 16:33:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:22:50.656 
16:33:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:50.656 16:33:59 -- host/auth.sh@44 -- # digest=sha256 00:22:50.656 16:33:59 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:50.656 16:33:59 -- host/auth.sh@44 -- # keyid=4 00:22:50.656 16:33:59 -- host/auth.sh@45 -- # key=DHHC-1:03:NjAwMDI5ODQwMzE5MWI3ZTRjZDU1OGI4MTM5ZGY4NTZlMDhhOTUyMGI5MzY1OGFhMTJhOTA1NTk4MWE2ZmRkZgqk8k0=: 00:22:50.656 16:33:59 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:50.656 16:33:59 -- host/auth.sh@48 -- # echo ffdhe4096 00:22:50.656 16:33:59 -- host/auth.sh@49 -- # echo DHHC-1:03:NjAwMDI5ODQwMzE5MWI3ZTRjZDU1OGI4MTM5ZGY4NTZlMDhhOTUyMGI5MzY1OGFhMTJhOTA1NTk4MWE2ZmRkZgqk8k0=: 00:22:50.656 16:33:59 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4 00:22:50.656 16:33:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:50.656 16:33:59 -- host/auth.sh@68 -- # digest=sha256 00:22:50.656 16:33:59 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:22:50.656 16:33:59 -- host/auth.sh@68 -- # keyid=4 00:22:50.656 16:33:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:50.656 16:33:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:50.656 16:33:59 -- common/autotest_common.sh@10 -- # set +x 00:22:50.656 16:33:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:50.656 16:33:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:50.656 16:33:59 -- nvmf/common.sh@717 -- # local ip 00:22:50.656 16:33:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:50.656 16:33:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:50.656 16:33:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:50.656 16:33:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:50.656 16:33:59 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:50.656 16:33:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:50.656 16:33:59 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:50.656 16:33:59 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:50.656 16:33:59 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:50.656 16:33:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:50.656 16:33:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:50.656 16:33:59 -- common/autotest_common.sh@10 -- # set +x 00:22:50.916 nvme0n1 00:22:50.916 16:33:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:50.916 16:33:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:50.916 16:33:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:50.916 16:33:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:50.916 16:33:59 -- common/autotest_common.sh@10 -- # set +x 00:22:50.916 16:33:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:50.916 16:33:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.916 16:33:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:50.916 16:33:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:50.916 16:33:59 -- common/autotest_common.sh@10 -- # set +x 00:22:50.916 16:33:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:50.916 16:33:59 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:22:50.916 16:33:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:50.916 16:33:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 
ffdhe6144 0 00:22:50.916 16:33:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:50.916 16:33:59 -- host/auth.sh@44 -- # digest=sha256 00:22:50.916 16:33:59 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:50.916 16:33:59 -- host/auth.sh@44 -- # keyid=0 00:22:50.916 16:33:59 -- host/auth.sh@45 -- # key=DHHC-1:00:NTVhMTVjZDNlNWFlZTM2NTY0ZjU3MjFmNjQ2YmI2ZjEV/MxR: 00:22:50.916 16:33:59 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:50.916 16:33:59 -- host/auth.sh@48 -- # echo ffdhe6144 00:22:52.293 16:34:01 -- host/auth.sh@49 -- # echo DHHC-1:00:NTVhMTVjZDNlNWFlZTM2NTY0ZjU3MjFmNjQ2YmI2ZjEV/MxR: 00:22:52.293 16:34:01 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:22:52.293 16:34:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:52.293 16:34:01 -- host/auth.sh@68 -- # digest=sha256 00:22:52.293 16:34:01 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:22:52.293 16:34:01 -- host/auth.sh@68 -- # keyid=0 00:22:52.293 16:34:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:52.293 16:34:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:52.293 16:34:01 -- common/autotest_common.sh@10 -- # set +x 00:22:52.293 16:34:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:52.293 16:34:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:52.293 16:34:01 -- nvmf/common.sh@717 -- # local ip 00:22:52.293 16:34:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:52.293 16:34:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:52.293 16:34:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:52.293 16:34:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:52.293 16:34:01 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:52.293 16:34:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:52.293 16:34:01 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:52.293 16:34:01 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:52.293 16:34:01 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:52.293 16:34:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:22:52.293 16:34:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:52.293 16:34:01 -- common/autotest_common.sh@10 -- # set +x 00:22:52.552 nvme0n1 00:22:52.552 16:34:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:52.552 16:34:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:52.552 16:34:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:52.552 16:34:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:52.552 16:34:01 -- common/autotest_common.sh@10 -- # set +x 00:22:52.552 16:34:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:52.552 16:34:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.552 16:34:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:52.552 16:34:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:52.552 16:34:01 -- common/autotest_common.sh@10 -- # set +x 00:22:52.552 16:34:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:52.552 16:34:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:52.552 16:34:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:22:52.552 16:34:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:52.552 16:34:01 -- host/auth.sh@44 -- # 
digest=sha256 00:22:52.552 16:34:01 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:52.552 16:34:01 -- host/auth.sh@44 -- # keyid=1 00:22:52.552 16:34:01 -- host/auth.sh@45 -- # key=DHHC-1:00:MWM4OWNmNjMxZDhlNjU4YWU5NmNhNTJhMWJlYTgxODcwMDlkNjZmYTRkNGM4MTA0bu+4fw==: 00:22:52.552 16:34:01 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:52.552 16:34:01 -- host/auth.sh@48 -- # echo ffdhe6144 00:22:52.552 16:34:01 -- host/auth.sh@49 -- # echo DHHC-1:00:MWM4OWNmNjMxZDhlNjU4YWU5NmNhNTJhMWJlYTgxODcwMDlkNjZmYTRkNGM4MTA0bu+4fw==: 00:22:52.552 16:34:01 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1 00:22:52.552 16:34:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:52.552 16:34:01 -- host/auth.sh@68 -- # digest=sha256 00:22:52.552 16:34:01 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:22:52.552 16:34:01 -- host/auth.sh@68 -- # keyid=1 00:22:52.552 16:34:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:52.552 16:34:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:52.552 16:34:01 -- common/autotest_common.sh@10 -- # set +x 00:22:52.553 16:34:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:52.553 16:34:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:52.553 16:34:01 -- nvmf/common.sh@717 -- # local ip 00:22:52.553 16:34:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:52.553 16:34:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:52.553 16:34:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:52.553 16:34:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:52.553 16:34:01 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:52.553 16:34:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:52.553 16:34:01 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:52.553 16:34:01 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:52.553 16:34:01 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:52.553 16:34:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:22:52.553 16:34:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:52.553 16:34:01 -- common/autotest_common.sh@10 -- # set +x 00:22:53.121 nvme0n1 00:22:53.121 16:34:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:53.121 16:34:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:53.121 16:34:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:53.121 16:34:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:53.121 16:34:01 -- common/autotest_common.sh@10 -- # set +x 00:22:53.121 16:34:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:53.121 16:34:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.121 16:34:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:53.121 16:34:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:53.121 16:34:01 -- common/autotest_common.sh@10 -- # set +x 00:22:53.121 16:34:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:53.121 16:34:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:53.121 16:34:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:22:53.121 16:34:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:53.121 16:34:01 -- host/auth.sh@44 -- # digest=sha256 00:22:53.121 16:34:01 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:53.121 
16:34:01 -- host/auth.sh@44 -- # keyid=2 00:22:53.121 16:34:01 -- host/auth.sh@45 -- # key=DHHC-1:01:MzRkNjkyZjY0NDVmMTk1ZWVkYjQwNDA2ZThkZTJjYTQw9r9h: 00:22:53.121 16:34:01 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:53.121 16:34:01 -- host/auth.sh@48 -- # echo ffdhe6144 00:22:53.121 16:34:01 -- host/auth.sh@49 -- # echo DHHC-1:01:MzRkNjkyZjY0NDVmMTk1ZWVkYjQwNDA2ZThkZTJjYTQw9r9h: 00:22:53.121 16:34:01 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:22:53.121 16:34:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:53.121 16:34:01 -- host/auth.sh@68 -- # digest=sha256 00:22:53.121 16:34:01 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:22:53.121 16:34:01 -- host/auth.sh@68 -- # keyid=2 00:22:53.121 16:34:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:53.121 16:34:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:53.121 16:34:01 -- common/autotest_common.sh@10 -- # set +x 00:22:53.121 16:34:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:53.121 16:34:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:53.121 16:34:01 -- nvmf/common.sh@717 -- # local ip 00:22:53.121 16:34:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:53.121 16:34:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:53.121 16:34:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:53.121 16:34:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:53.121 16:34:01 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:53.121 16:34:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:53.121 16:34:01 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:53.121 16:34:01 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:53.121 16:34:01 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:53.121 16:34:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:53.121 16:34:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:53.121 16:34:01 -- common/autotest_common.sh@10 -- # set +x 00:22:53.381 nvme0n1 00:22:53.381 16:34:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:53.381 16:34:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:53.381 16:34:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:53.381 16:34:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:53.381 16:34:02 -- common/autotest_common.sh@10 -- # set +x 00:22:53.381 16:34:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:53.381 16:34:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.381 16:34:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:53.381 16:34:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:53.381 16:34:02 -- common/autotest_common.sh@10 -- # set +x 00:22:53.381 16:34:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:53.381 16:34:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:53.381 16:34:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:22:53.381 16:34:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:53.381 16:34:02 -- host/auth.sh@44 -- # digest=sha256 00:22:53.381 16:34:02 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:53.381 16:34:02 -- host/auth.sh@44 -- # keyid=3 00:22:53.381 16:34:02 -- host/auth.sh@45 -- # 
key=DHHC-1:02:Nzk2M2YwZjYyYjg0Zjk2MzM2MGI5MGQ4OTE5MjIxN2Y2OGYyYjIzMjIyZjg3OWVmxPGfTg==: 00:22:53.381 16:34:02 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:53.381 16:34:02 -- host/auth.sh@48 -- # echo ffdhe6144 00:22:53.381 16:34:02 -- host/auth.sh@49 -- # echo DHHC-1:02:Nzk2M2YwZjYyYjg0Zjk2MzM2MGI5MGQ4OTE5MjIxN2Y2OGYyYjIzMjIyZjg3OWVmxPGfTg==: 00:22:53.381 16:34:02 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:22:53.381 16:34:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:53.381 16:34:02 -- host/auth.sh@68 -- # digest=sha256 00:22:53.381 16:34:02 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:22:53.381 16:34:02 -- host/auth.sh@68 -- # keyid=3 00:22:53.381 16:34:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:53.381 16:34:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:53.381 16:34:02 -- common/autotest_common.sh@10 -- # set +x 00:22:53.381 16:34:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:53.381 16:34:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:53.381 16:34:02 -- nvmf/common.sh@717 -- # local ip 00:22:53.381 16:34:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:53.381 16:34:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:53.381 16:34:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:53.381 16:34:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:53.381 16:34:02 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:53.381 16:34:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:53.381 16:34:02 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:53.381 16:34:02 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:53.381 16:34:02 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:53.381 16:34:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:22:53.381 16:34:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:53.381 16:34:02 -- common/autotest_common.sh@10 -- # set +x 00:22:53.949 nvme0n1 00:22:53.949 16:34:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:53.949 16:34:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:53.949 16:34:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:53.949 16:34:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:53.949 16:34:02 -- common/autotest_common.sh@10 -- # set +x 00:22:53.949 16:34:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:53.949 16:34:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.949 16:34:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:53.949 16:34:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:53.949 16:34:02 -- common/autotest_common.sh@10 -- # set +x 00:22:53.949 16:34:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:53.949 16:34:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:53.949 16:34:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:22:53.949 16:34:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:53.949 16:34:02 -- host/auth.sh@44 -- # digest=sha256 00:22:53.949 16:34:02 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:53.949 16:34:02 -- host/auth.sh@44 -- # keyid=4 00:22:53.949 16:34:02 -- host/auth.sh@45 -- # 
key=DHHC-1:03:NjAwMDI5ODQwMzE5MWI3ZTRjZDU1OGI4MTM5ZGY4NTZlMDhhOTUyMGI5MzY1OGFhMTJhOTA1NTk4MWE2ZmRkZgqk8k0=: 00:22:53.949 16:34:02 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:53.949 16:34:02 -- host/auth.sh@48 -- # echo ffdhe6144 00:22:53.949 16:34:02 -- host/auth.sh@49 -- # echo DHHC-1:03:NjAwMDI5ODQwMzE5MWI3ZTRjZDU1OGI4MTM5ZGY4NTZlMDhhOTUyMGI5MzY1OGFhMTJhOTA1NTk4MWE2ZmRkZgqk8k0=: 00:22:53.949 16:34:02 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:22:53.949 16:34:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:53.949 16:34:02 -- host/auth.sh@68 -- # digest=sha256 00:22:53.949 16:34:02 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:22:53.949 16:34:02 -- host/auth.sh@68 -- # keyid=4 00:22:53.949 16:34:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:53.949 16:34:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:53.949 16:34:02 -- common/autotest_common.sh@10 -- # set +x 00:22:53.949 16:34:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:53.949 16:34:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:53.949 16:34:02 -- nvmf/common.sh@717 -- # local ip 00:22:53.949 16:34:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:53.949 16:34:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:53.949 16:34:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:53.949 16:34:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:53.949 16:34:02 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:53.949 16:34:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:53.949 16:34:02 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:53.949 16:34:02 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:53.949 16:34:02 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:53.949 16:34:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:53.949 16:34:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:53.949 16:34:02 -- common/autotest_common.sh@10 -- # set +x 00:22:54.209 nvme0n1 00:22:54.209 16:34:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:54.209 16:34:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:54.209 16:34:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:54.209 16:34:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:54.209 16:34:03 -- common/autotest_common.sh@10 -- # set +x 00:22:54.209 16:34:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:54.209 16:34:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:54.209 16:34:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:54.209 16:34:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:54.209 16:34:03 -- common/autotest_common.sh@10 -- # set +x 00:22:54.209 16:34:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:54.209 16:34:03 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:22:54.209 16:34:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:54.209 16:34:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:22:54.209 16:34:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:54.209 16:34:03 -- host/auth.sh@44 -- # digest=sha256 00:22:54.209 16:34:03 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:54.209 16:34:03 -- host/auth.sh@44 -- # keyid=0 00:22:54.209 16:34:03 -- 
host/auth.sh@45 -- # key=DHHC-1:00:NTVhMTVjZDNlNWFlZTM2NTY0ZjU3MjFmNjQ2YmI2ZjEV/MxR: 00:22:54.209 16:34:03 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:54.209 16:34:03 -- host/auth.sh@48 -- # echo ffdhe8192 00:22:57.529 16:34:05 -- host/auth.sh@49 -- # echo DHHC-1:00:NTVhMTVjZDNlNWFlZTM2NTY0ZjU3MjFmNjQ2YmI2ZjEV/MxR: 00:22:57.529 16:34:05 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:22:57.529 16:34:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:57.529 16:34:05 -- host/auth.sh@68 -- # digest=sha256 00:22:57.529 16:34:05 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:22:57.529 16:34:05 -- host/auth.sh@68 -- # keyid=0 00:22:57.529 16:34:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:57.529 16:34:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:57.529 16:34:05 -- common/autotest_common.sh@10 -- # set +x 00:22:57.529 16:34:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:57.529 16:34:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:57.529 16:34:06 -- nvmf/common.sh@717 -- # local ip 00:22:57.529 16:34:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:57.529 16:34:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:57.529 16:34:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:57.529 16:34:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:57.529 16:34:06 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:57.529 16:34:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:57.529 16:34:06 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:57.529 16:34:06 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:57.529 16:34:06 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:57.529 16:34:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:22:57.529 16:34:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:57.529 16:34:06 -- common/autotest_common.sh@10 -- # set +x 00:22:57.529 nvme0n1 00:22:57.529 16:34:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:57.529 16:34:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:57.529 16:34:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:57.529 16:34:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:57.529 16:34:06 -- common/autotest_common.sh@10 -- # set +x 00:22:57.529 16:34:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:57.844 16:34:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.844 16:34:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:57.844 16:34:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:57.844 16:34:06 -- common/autotest_common.sh@10 -- # set +x 00:22:57.844 16:34:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:57.844 16:34:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:57.844 16:34:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:22:57.844 16:34:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:57.844 16:34:06 -- host/auth.sh@44 -- # digest=sha256 00:22:57.844 16:34:06 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:57.844 16:34:06 -- host/auth.sh@44 -- # keyid=1 00:22:57.844 16:34:06 -- host/auth.sh@45 -- # key=DHHC-1:00:MWM4OWNmNjMxZDhlNjU4YWU5NmNhNTJhMWJlYTgxODcwMDlkNjZmYTRkNGM4MTA0bu+4fw==: 00:22:57.844 16:34:06 -- 
host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:57.844 16:34:06 -- host/auth.sh@48 -- # echo ffdhe8192 00:22:57.844 16:34:06 -- host/auth.sh@49 -- # echo DHHC-1:00:MWM4OWNmNjMxZDhlNjU4YWU5NmNhNTJhMWJlYTgxODcwMDlkNjZmYTRkNGM4MTA0bu+4fw==: 00:22:57.844 16:34:06 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:22:57.844 16:34:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:57.844 16:34:06 -- host/auth.sh@68 -- # digest=sha256 00:22:57.844 16:34:06 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:22:57.844 16:34:06 -- host/auth.sh@68 -- # keyid=1 00:22:57.844 16:34:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:57.844 16:34:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:57.844 16:34:06 -- common/autotest_common.sh@10 -- # set +x 00:22:57.844 16:34:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:57.844 16:34:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:57.844 16:34:06 -- nvmf/common.sh@717 -- # local ip 00:22:57.844 16:34:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:57.844 16:34:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:57.844 16:34:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:57.844 16:34:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:57.844 16:34:06 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:57.844 16:34:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:57.844 16:34:06 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:57.844 16:34:06 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:57.844 16:34:06 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:57.844 16:34:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:22:57.844 16:34:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:57.844 16:34:06 -- common/autotest_common.sh@10 -- # set +x 00:22:58.109 nvme0n1 00:22:58.109 16:34:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:58.109 16:34:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:58.377 16:34:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:58.377 16:34:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:58.377 16:34:07 -- common/autotest_common.sh@10 -- # set +x 00:22:58.377 16:34:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:58.377 16:34:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:58.377 16:34:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:58.377 16:34:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:58.377 16:34:07 -- common/autotest_common.sh@10 -- # set +x 00:22:58.377 16:34:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:58.377 16:34:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:58.377 16:34:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:22:58.377 16:34:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:58.377 16:34:07 -- host/auth.sh@44 -- # digest=sha256 00:22:58.377 16:34:07 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:58.377 16:34:07 -- host/auth.sh@44 -- # keyid=2 00:22:58.377 16:34:07 -- host/auth.sh@45 -- # key=DHHC-1:01:MzRkNjkyZjY0NDVmMTk1ZWVkYjQwNDA2ZThkZTJjYTQw9r9h: 00:22:58.377 16:34:07 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:58.377 16:34:07 -- host/auth.sh@48 -- # echo ffdhe8192 00:22:58.377 
16:34:07 -- host/auth.sh@49 -- # echo DHHC-1:01:MzRkNjkyZjY0NDVmMTk1ZWVkYjQwNDA2ZThkZTJjYTQw9r9h: 00:22:58.377 16:34:07 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:22:58.377 16:34:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:58.377 16:34:07 -- host/auth.sh@68 -- # digest=sha256 00:22:58.377 16:34:07 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:22:58.377 16:34:07 -- host/auth.sh@68 -- # keyid=2 00:22:58.377 16:34:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:58.377 16:34:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:58.377 16:34:07 -- common/autotest_common.sh@10 -- # set +x 00:22:58.377 16:34:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:58.377 16:34:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:58.377 16:34:07 -- nvmf/common.sh@717 -- # local ip 00:22:58.377 16:34:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:58.377 16:34:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:58.377 16:34:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:58.377 16:34:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:58.377 16:34:07 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:58.377 16:34:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:58.377 16:34:07 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:58.377 16:34:07 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:58.377 16:34:07 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:58.377 16:34:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:22:58.377 16:34:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:58.377 16:34:07 -- common/autotest_common.sh@10 -- # set +x 00:22:58.942 nvme0n1 00:22:58.942 16:34:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:58.942 16:34:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:58.942 16:34:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:58.942 16:34:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:58.942 16:34:07 -- common/autotest_common.sh@10 -- # set +x 00:22:58.942 16:34:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:58.942 16:34:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:58.942 16:34:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:58.942 16:34:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:58.942 16:34:07 -- common/autotest_common.sh@10 -- # set +x 00:22:58.942 16:34:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:58.942 16:34:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:58.942 16:34:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:22:58.942 16:34:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:58.942 16:34:07 -- host/auth.sh@44 -- # digest=sha256 00:22:58.942 16:34:07 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:58.942 16:34:07 -- host/auth.sh@44 -- # keyid=3 00:22:58.942 16:34:07 -- host/auth.sh@45 -- # key=DHHC-1:02:Nzk2M2YwZjYyYjg0Zjk2MzM2MGI5MGQ4OTE5MjIxN2Y2OGYyYjIzMjIyZjg3OWVmxPGfTg==: 00:22:58.942 16:34:07 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:58.942 16:34:07 -- host/auth.sh@48 -- # echo ffdhe8192 00:22:58.942 16:34:07 -- host/auth.sh@49 -- # echo DHHC-1:02:Nzk2M2YwZjYyYjg0Zjk2MzM2MGI5MGQ4OTE5MjIxN2Y2OGYyYjIzMjIyZjg3OWVmxPGfTg==: 
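The three echo lines above are the target-side half of each iteration: before the host attempts to connect, the test hands the chosen digest (wrapped as 'hmac(<digest>)'), the FFDHE group name, and the DHHC-1 secret for that keyid to the in-kernel nvmet target. A minimal sketch of what such a helper might do, assuming the usual nvmet configfs layout (the /sys/kernel/config/nvmet path, the dhchap_* attribute names, and the $hostnqn / $keys variables are assumptions, not something this log shows; only the three echoed values come from the trace):

    # Hypothetical sketch of a target-side key setter; paths and variable
    # names are assumed, only the echoed values appear in this log.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[$keyid]}                          # e.g. DHHC-1:01:MzRk...9r9h:
        local host_dir=/sys/kernel/config/nvmet/hosts/$hostnqn
        echo "hmac(${digest})" > "${host_dir}/dhchap_hash"     # challenge hash, e.g. hmac(sha256)
        echo "${dhgroup}"      > "${host_dir}/dhchap_dhgroup"  # FFDHE group, e.g. ffdhe8192
        echo "${key}"          > "${host_dir}/dhchap_key"      # host secret in DHHC-1 format
    }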
00:22:58.942 16:34:07 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:22:58.942 16:34:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:58.942 16:34:07 -- host/auth.sh@68 -- # digest=sha256 00:22:58.942 16:34:07 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:22:58.942 16:34:07 -- host/auth.sh@68 -- # keyid=3 00:22:58.942 16:34:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:58.942 16:34:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:58.942 16:34:07 -- common/autotest_common.sh@10 -- # set +x 00:22:58.942 16:34:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:58.942 16:34:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:58.942 16:34:07 -- nvmf/common.sh@717 -- # local ip 00:22:58.942 16:34:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:58.942 16:34:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:58.942 16:34:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:58.942 16:34:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:58.942 16:34:07 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:58.942 16:34:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:58.942 16:34:07 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:58.942 16:34:07 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:58.942 16:34:07 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:58.942 16:34:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:22:58.942 16:34:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:58.942 16:34:07 -- common/autotest_common.sh@10 -- # set +x 00:22:59.507 nvme0n1 00:22:59.507 16:34:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:59.507 16:34:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:22:59.507 16:34:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:59.507 16:34:08 -- host/auth.sh@73 -- # jq -r '.[].name' 00:22:59.507 16:34:08 -- common/autotest_common.sh@10 -- # set +x 00:22:59.507 16:34:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:59.507 16:34:08 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.507 16:34:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:59.507 16:34:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:59.507 16:34:08 -- common/autotest_common.sh@10 -- # set +x 00:22:59.507 16:34:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:59.507 16:34:08 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:22:59.507 16:34:08 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:22:59.507 16:34:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:22:59.507 16:34:08 -- host/auth.sh@44 -- # digest=sha256 00:22:59.507 16:34:08 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:22:59.507 16:34:08 -- host/auth.sh@44 -- # keyid=4 00:22:59.507 16:34:08 -- host/auth.sh@45 -- # key=DHHC-1:03:NjAwMDI5ODQwMzE5MWI3ZTRjZDU1OGI4MTM5ZGY4NTZlMDhhOTUyMGI5MzY1OGFhMTJhOTA1NTk4MWE2ZmRkZgqk8k0=: 00:22:59.507 16:34:08 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:22:59.507 16:34:08 -- host/auth.sh@48 -- # echo ffdhe8192 00:22:59.507 16:34:08 -- host/auth.sh@49 -- # echo DHHC-1:03:NjAwMDI5ODQwMzE5MWI3ZTRjZDU1OGI4MTM5ZGY4NTZlMDhhOTUyMGI5MzY1OGFhMTJhOTA1NTk4MWE2ZmRkZgqk8k0=: 00:22:59.507 16:34:08 -- host/auth.sh@111 -- # 
connect_authenticate sha256 ffdhe8192 4 00:22:59.507 16:34:08 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:22:59.507 16:34:08 -- host/auth.sh@68 -- # digest=sha256 00:22:59.507 16:34:08 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:22:59.507 16:34:08 -- host/auth.sh@68 -- # keyid=4 00:22:59.507 16:34:08 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:59.507 16:34:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:59.507 16:34:08 -- common/autotest_common.sh@10 -- # set +x 00:22:59.507 16:34:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:59.507 16:34:08 -- host/auth.sh@70 -- # get_main_ns_ip 00:22:59.507 16:34:08 -- nvmf/common.sh@717 -- # local ip 00:22:59.507 16:34:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:22:59.507 16:34:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:22:59.507 16:34:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:59.507 16:34:08 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:59.507 16:34:08 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:22:59.507 16:34:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:59.507 16:34:08 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:22:59.507 16:34:08 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:22:59.507 16:34:08 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:22:59.507 16:34:08 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:59.507 16:34:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:59.507 16:34:08 -- common/autotest_common.sh@10 -- # set +x 00:23:00.072 nvme0n1 00:23:00.072 16:34:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:00.072 16:34:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:00.072 16:34:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.072 16:34:08 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:00.072 16:34:08 -- common/autotest_common.sh@10 -- # set +x 00:23:00.072 16:34:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:00.072 16:34:08 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.072 16:34:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:00.072 16:34:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.072 16:34:08 -- common/autotest_common.sh@10 -- # set +x 00:23:00.072 16:34:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:00.072 16:34:08 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:23:00.072 16:34:08 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:23:00.072 16:34:08 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:00.072 16:34:08 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:23:00.072 16:34:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:00.072 16:34:08 -- host/auth.sh@44 -- # digest=sha384 00:23:00.072 16:34:08 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:00.072 16:34:08 -- host/auth.sh@44 -- # keyid=0 00:23:00.072 16:34:08 -- host/auth.sh@45 -- # key=DHHC-1:00:NTVhMTVjZDNlNWFlZTM2NTY0ZjU3MjFmNjQ2YmI2ZjEV/MxR: 00:23:00.072 16:34:08 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:00.072 16:34:08 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:00.072 16:34:08 -- host/auth.sh@49 -- # echo DHHC-1:00:NTVhMTVjZDNlNWFlZTM2NTY0ZjU3MjFmNjQ2YmI2ZjEV/MxR: 00:23:00.072 16:34:08 -- 
host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:23:00.072 16:34:08 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:00.072 16:34:08 -- host/auth.sh@68 -- # digest=sha384 00:23:00.073 16:34:08 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:00.073 16:34:08 -- host/auth.sh@68 -- # keyid=0 00:23:00.073 16:34:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:00.073 16:34:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.073 16:34:09 -- common/autotest_common.sh@10 -- # set +x 00:23:00.073 16:34:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:00.073 16:34:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:00.073 16:34:09 -- nvmf/common.sh@717 -- # local ip 00:23:00.073 16:34:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:00.073 16:34:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:00.073 16:34:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:00.073 16:34:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:00.073 16:34:09 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:00.073 16:34:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:00.073 16:34:09 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:00.073 16:34:09 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:00.073 16:34:09 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:00.073 16:34:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:23:00.073 16:34:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.073 16:34:09 -- common/autotest_common.sh@10 -- # set +x 00:23:00.330 nvme0n1 00:23:00.330 16:34:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:00.330 16:34:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:00.330 16:34:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:00.330 16:34:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.330 16:34:09 -- common/autotest_common.sh@10 -- # set +x 00:23:00.330 16:34:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:00.330 16:34:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.330 16:34:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:00.330 16:34:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.330 16:34:09 -- common/autotest_common.sh@10 -- # set +x 00:23:00.330 16:34:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:00.330 16:34:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:00.330 16:34:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:23:00.330 16:34:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:00.330 16:34:09 -- host/auth.sh@44 -- # digest=sha384 00:23:00.330 16:34:09 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:00.330 16:34:09 -- host/auth.sh@44 -- # keyid=1 00:23:00.330 16:34:09 -- host/auth.sh@45 -- # key=DHHC-1:00:MWM4OWNmNjMxZDhlNjU4YWU5NmNhNTJhMWJlYTgxODcwMDlkNjZmYTRkNGM4MTA0bu+4fw==: 00:23:00.330 16:34:09 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:00.330 16:34:09 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:00.330 16:34:09 -- host/auth.sh@49 -- # echo DHHC-1:00:MWM4OWNmNjMxZDhlNjU4YWU5NmNhNTJhMWJlYTgxODcwMDlkNjZmYTRkNGM4MTA0bu+4fw==: 00:23:00.330 16:34:09 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:23:00.330 16:34:09 -- 
host/auth.sh@66 -- # local digest dhgroup keyid 00:23:00.330 16:34:09 -- host/auth.sh@68 -- # digest=sha384 00:23:00.330 16:34:09 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:00.330 16:34:09 -- host/auth.sh@68 -- # keyid=1 00:23:00.330 16:34:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:00.330 16:34:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.330 16:34:09 -- common/autotest_common.sh@10 -- # set +x 00:23:00.330 16:34:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:00.330 16:34:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:00.330 16:34:09 -- nvmf/common.sh@717 -- # local ip 00:23:00.330 16:34:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:00.330 16:34:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:00.330 16:34:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:00.330 16:34:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:00.330 16:34:09 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:00.330 16:34:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:00.330 16:34:09 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:00.330 16:34:09 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:00.330 16:34:09 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:00.331 16:34:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:23:00.331 16:34:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.331 16:34:09 -- common/autotest_common.sh@10 -- # set +x 00:23:00.587 nvme0n1 00:23:00.587 16:34:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:00.587 16:34:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:00.587 16:34:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.587 16:34:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:00.587 16:34:09 -- common/autotest_common.sh@10 -- # set +x 00:23:00.587 16:34:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:00.587 16:34:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.587 16:34:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:00.587 16:34:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.587 16:34:09 -- common/autotest_common.sh@10 -- # set +x 00:23:00.587 16:34:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:00.587 16:34:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:00.587 16:34:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:23:00.587 16:34:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:00.587 16:34:09 -- host/auth.sh@44 -- # digest=sha384 00:23:00.587 16:34:09 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:00.587 16:34:09 -- host/auth.sh@44 -- # keyid=2 00:23:00.587 16:34:09 -- host/auth.sh@45 -- # key=DHHC-1:01:MzRkNjkyZjY0NDVmMTk1ZWVkYjQwNDA2ZThkZTJjYTQw9r9h: 00:23:00.587 16:34:09 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:00.587 16:34:09 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:00.587 16:34:09 -- host/auth.sh@49 -- # echo DHHC-1:01:MzRkNjkyZjY0NDVmMTk1ZWVkYjQwNDA2ZThkZTJjYTQw9r9h: 00:23:00.587 16:34:09 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:23:00.587 16:34:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:00.587 16:34:09 -- host/auth.sh@68 -- # digest=sha384 00:23:00.587 16:34:09 -- 
host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:00.587 16:34:09 -- host/auth.sh@68 -- # keyid=2 00:23:00.587 16:34:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:00.587 16:34:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.587 16:34:09 -- common/autotest_common.sh@10 -- # set +x 00:23:00.587 16:34:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:00.587 16:34:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:00.587 16:34:09 -- nvmf/common.sh@717 -- # local ip 00:23:00.588 16:34:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:00.588 16:34:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:00.588 16:34:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:00.588 16:34:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:00.588 16:34:09 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:00.588 16:34:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:00.588 16:34:09 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:00.588 16:34:09 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:00.588 16:34:09 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:00.588 16:34:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:00.588 16:34:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.588 16:34:09 -- common/autotest_common.sh@10 -- # set +x 00:23:00.588 nvme0n1 00:23:00.588 16:34:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:00.588 16:34:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:00.588 16:34:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:00.588 16:34:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.588 16:34:09 -- common/autotest_common.sh@10 -- # set +x 00:23:00.845 16:34:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:00.845 16:34:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.845 16:34:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:00.845 16:34:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.845 16:34:09 -- common/autotest_common.sh@10 -- # set +x 00:23:00.845 16:34:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:00.845 16:34:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:00.845 16:34:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:23:00.845 16:34:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:00.845 16:34:09 -- host/auth.sh@44 -- # digest=sha384 00:23:00.845 16:34:09 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:00.845 16:34:09 -- host/auth.sh@44 -- # keyid=3 00:23:00.845 16:34:09 -- host/auth.sh@45 -- # key=DHHC-1:02:Nzk2M2YwZjYyYjg0Zjk2MzM2MGI5MGQ4OTE5MjIxN2Y2OGYyYjIzMjIyZjg3OWVmxPGfTg==: 00:23:00.845 16:34:09 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:00.845 16:34:09 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:00.845 16:34:09 -- host/auth.sh@49 -- # echo DHHC-1:02:Nzk2M2YwZjYyYjg0Zjk2MzM2MGI5MGQ4OTE5MjIxN2Y2OGYyYjIzMjIyZjg3OWVmxPGfTg==: 00:23:00.845 16:34:09 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 3 00:23:00.845 16:34:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:00.845 16:34:09 -- host/auth.sh@68 -- # digest=sha384 00:23:00.845 16:34:09 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:00.845 16:34:09 -- host/auth.sh@68 -- # keyid=3 
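Each connect_authenticate pass that follows is the host-side check for the key just installed on the target: restrict the initiator to the same digest and DH group, attach over RDMA with the matching --dhchap-key name, confirm the controller appears, then detach before the next iteration. Condensed from the rpc_cmd calls visible in this log (rpc_cmd is the test harness wrapper around SPDK's JSON-RPC client; the inline loop variables are shown only for illustration):

    # One host-side iteration, e.g. sha384 / ffdhe2048 / keyid 3.
    digest=sha384 dhgroup=ffdhe2048 keyid=3
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}"
    # The attach only yields a controller if the DH-HMAC-CHAP handshake succeeded.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0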
00:23:00.845 16:34:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:00.845 16:34:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.845 16:34:09 -- common/autotest_common.sh@10 -- # set +x 00:23:00.845 16:34:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:00.845 16:34:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:00.845 16:34:09 -- nvmf/common.sh@717 -- # local ip 00:23:00.845 16:34:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:00.845 16:34:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:00.845 16:34:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:00.845 16:34:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:00.845 16:34:09 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:00.845 16:34:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:00.845 16:34:09 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:00.845 16:34:09 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:00.845 16:34:09 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:00.845 16:34:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:23:00.845 16:34:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.845 16:34:09 -- common/autotest_common.sh@10 -- # set +x 00:23:00.845 nvme0n1 00:23:00.845 16:34:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:00.845 16:34:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:00.845 16:34:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:00.845 16:34:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.845 16:34:09 -- common/autotest_common.sh@10 -- # set +x 00:23:00.845 16:34:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:00.845 16:34:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.845 16:34:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:00.845 16:34:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:00.845 16:34:09 -- common/autotest_common.sh@10 -- # set +x 00:23:01.112 16:34:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.112 16:34:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:01.113 16:34:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:23:01.113 16:34:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:01.113 16:34:09 -- host/auth.sh@44 -- # digest=sha384 00:23:01.113 16:34:09 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:01.113 16:34:09 -- host/auth.sh@44 -- # keyid=4 00:23:01.113 16:34:09 -- host/auth.sh@45 -- # key=DHHC-1:03:NjAwMDI5ODQwMzE5MWI3ZTRjZDU1OGI4MTM5ZGY4NTZlMDhhOTUyMGI5MzY1OGFhMTJhOTA1NTk4MWE2ZmRkZgqk8k0=: 00:23:01.113 16:34:09 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:01.113 16:34:09 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:01.113 16:34:09 -- host/auth.sh@49 -- # echo DHHC-1:03:NjAwMDI5ODQwMzE5MWI3ZTRjZDU1OGI4MTM5ZGY4NTZlMDhhOTUyMGI5MzY1OGFhMTJhOTA1NTk4MWE2ZmRkZgqk8k0=: 00:23:01.113 16:34:09 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:23:01.113 16:34:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:01.113 16:34:09 -- host/auth.sh@68 -- # digest=sha384 00:23:01.113 16:34:09 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:01.113 16:34:09 -- host/auth.sh@68 -- # keyid=4 00:23:01.113 16:34:09 -- host/auth.sh@69 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:01.113 16:34:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.113 16:34:09 -- common/autotest_common.sh@10 -- # set +x 00:23:01.113 16:34:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.113 16:34:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:01.113 16:34:09 -- nvmf/common.sh@717 -- # local ip 00:23:01.113 16:34:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:01.113 16:34:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:01.113 16:34:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:01.113 16:34:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:01.113 16:34:09 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:01.113 16:34:09 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:01.113 16:34:09 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:01.113 16:34:09 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:01.113 16:34:09 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:01.113 16:34:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:01.113 16:34:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.113 16:34:09 -- common/autotest_common.sh@10 -- # set +x 00:23:01.113 nvme0n1 00:23:01.113 16:34:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.113 16:34:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:01.113 16:34:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.113 16:34:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:01.113 16:34:10 -- common/autotest_common.sh@10 -- # set +x 00:23:01.113 16:34:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.113 16:34:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.113 16:34:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:01.113 16:34:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.113 16:34:10 -- common/autotest_common.sh@10 -- # set +x 00:23:01.113 16:34:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.113 16:34:10 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:23:01.113 16:34:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:01.113 16:34:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:23:01.113 16:34:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:01.113 16:34:10 -- host/auth.sh@44 -- # digest=sha384 00:23:01.113 16:34:10 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:01.113 16:34:10 -- host/auth.sh@44 -- # keyid=0 00:23:01.113 16:34:10 -- host/auth.sh@45 -- # key=DHHC-1:00:NTVhMTVjZDNlNWFlZTM2NTY0ZjU3MjFmNjQ2YmI2ZjEV/MxR: 00:23:01.113 16:34:10 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:01.113 16:34:10 -- host/auth.sh@48 -- # echo ffdhe3072 00:23:01.113 16:34:10 -- host/auth.sh@49 -- # echo DHHC-1:00:NTVhMTVjZDNlNWFlZTM2NTY0ZjU3MjFmNjQ2YmI2ZjEV/MxR: 00:23:01.113 16:34:10 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:23:01.113 16:34:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:01.113 16:34:10 -- host/auth.sh@68 -- # digest=sha384 00:23:01.113 16:34:10 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:23:01.113 16:34:10 -- host/auth.sh@68 -- # keyid=0 00:23:01.113 16:34:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe3072 00:23:01.113 16:34:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.113 16:34:10 -- common/autotest_common.sh@10 -- # set +x 00:23:01.113 16:34:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.373 16:34:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:01.373 16:34:10 -- nvmf/common.sh@717 -- # local ip 00:23:01.373 16:34:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:01.373 16:34:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:01.373 16:34:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:01.373 16:34:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:01.373 16:34:10 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:01.373 16:34:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:01.373 16:34:10 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:01.374 16:34:10 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:01.374 16:34:10 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:01.374 16:34:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:23:01.374 16:34:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.374 16:34:10 -- common/autotest_common.sh@10 -- # set +x 00:23:01.374 nvme0n1 00:23:01.374 16:34:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.374 16:34:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:01.374 16:34:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.374 16:34:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:01.374 16:34:10 -- common/autotest_common.sh@10 -- # set +x 00:23:01.374 16:34:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.374 16:34:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.374 16:34:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:01.374 16:34:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.374 16:34:10 -- common/autotest_common.sh@10 -- # set +x 00:23:01.374 16:34:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.374 16:34:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:01.374 16:34:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:23:01.374 16:34:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:01.374 16:34:10 -- host/auth.sh@44 -- # digest=sha384 00:23:01.374 16:34:10 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:01.374 16:34:10 -- host/auth.sh@44 -- # keyid=1 00:23:01.374 16:34:10 -- host/auth.sh@45 -- # key=DHHC-1:00:MWM4OWNmNjMxZDhlNjU4YWU5NmNhNTJhMWJlYTgxODcwMDlkNjZmYTRkNGM4MTA0bu+4fw==: 00:23:01.374 16:34:10 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:01.374 16:34:10 -- host/auth.sh@48 -- # echo ffdhe3072 00:23:01.374 16:34:10 -- host/auth.sh@49 -- # echo DHHC-1:00:MWM4OWNmNjMxZDhlNjU4YWU5NmNhNTJhMWJlYTgxODcwMDlkNjZmYTRkNGM4MTA0bu+4fw==: 00:23:01.374 16:34:10 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:23:01.374 16:34:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:01.374 16:34:10 -- host/auth.sh@68 -- # digest=sha384 00:23:01.374 16:34:10 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:23:01.374 16:34:10 -- host/auth.sh@68 -- # keyid=1 00:23:01.374 16:34:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:01.374 16:34:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.374 
16:34:10 -- common/autotest_common.sh@10 -- # set +x 00:23:01.374 16:34:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.374 16:34:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:01.374 16:34:10 -- nvmf/common.sh@717 -- # local ip 00:23:01.374 16:34:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:01.374 16:34:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:01.374 16:34:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:01.374 16:34:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:01.374 16:34:10 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:01.374 16:34:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:01.374 16:34:10 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:01.374 16:34:10 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:01.374 16:34:10 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:01.374 16:34:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:23:01.374 16:34:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.374 16:34:10 -- common/autotest_common.sh@10 -- # set +x 00:23:01.629 nvme0n1 00:23:01.629 16:34:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.629 16:34:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:01.629 16:34:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.629 16:34:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:01.629 16:34:10 -- common/autotest_common.sh@10 -- # set +x 00:23:01.629 16:34:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.629 16:34:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.629 16:34:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:01.629 16:34:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.630 16:34:10 -- common/autotest_common.sh@10 -- # set +x 00:23:01.630 16:34:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.630 16:34:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:01.630 16:34:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:23:01.630 16:34:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:01.630 16:34:10 -- host/auth.sh@44 -- # digest=sha384 00:23:01.630 16:34:10 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:01.630 16:34:10 -- host/auth.sh@44 -- # keyid=2 00:23:01.630 16:34:10 -- host/auth.sh@45 -- # key=DHHC-1:01:MzRkNjkyZjY0NDVmMTk1ZWVkYjQwNDA2ZThkZTJjYTQw9r9h: 00:23:01.630 16:34:10 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:01.630 16:34:10 -- host/auth.sh@48 -- # echo ffdhe3072 00:23:01.630 16:34:10 -- host/auth.sh@49 -- # echo DHHC-1:01:MzRkNjkyZjY0NDVmMTk1ZWVkYjQwNDA2ZThkZTJjYTQw9r9h: 00:23:01.630 16:34:10 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:23:01.630 16:34:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:01.630 16:34:10 -- host/auth.sh@68 -- # digest=sha384 00:23:01.630 16:34:10 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:23:01.630 16:34:10 -- host/auth.sh@68 -- # keyid=2 00:23:01.630 16:34:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:01.630 16:34:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.630 16:34:10 -- common/autotest_common.sh@10 -- # set +x 00:23:01.630 16:34:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.630 
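The nvmet_auth_set_key calls interleaved above provision the target side before each host connect: the helper selects the DHHC-1 secret for the requested key index and pushes the digest, DH group, and secret to the kernel nvmet host entry. A minimal sketch of that helper, reconstructed from the echoed values; the configfs attribute paths are an assumption, since the trace only shows the values being echoed, not their destinations:

  # Target-side helper as it appears in the trace (host/auth.sh, nvmet_auth_set_key).
  # The /sys/kernel/config/nvmet/... paths below are assumed, not shown in the log.
  nvmet_auth_set_key() {
      local digest dhgroup keyid key
      digest=$1 dhgroup=$2 keyid=$3
      key=${keys[$keyid]}                                # DHHC-1:0X:<base64>: secrets seen above
      local host_cfg=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path

      echo "hmac($digest)" > "$host_cfg/dhchap_hash"     # e.g. hmac(sha384)
      echo "$dhgroup"      > "$host_cfg/dhchap_dhgroup"  # ffdhe2048 ... ffdhe8192
      echo "$key"          > "$host_cfg/dhchap_key"      # per-keyid DH-HMAC-CHAP secret
  }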
16:34:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:01.630 16:34:10 -- nvmf/common.sh@717 -- # local ip 00:23:01.630 16:34:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:01.630 16:34:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:01.630 16:34:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:01.630 16:34:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:01.630 16:34:10 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:01.630 16:34:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:01.630 16:34:10 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:01.630 16:34:10 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:01.630 16:34:10 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:01.630 16:34:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:01.630 16:34:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.630 16:34:10 -- common/autotest_common.sh@10 -- # set +x 00:23:01.887 nvme0n1 00:23:01.887 16:34:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.887 16:34:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:01.887 16:34:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:01.887 16:34:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.887 16:34:10 -- common/autotest_common.sh@10 -- # set +x 00:23:01.887 16:34:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.887 16:34:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.887 16:34:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:01.887 16:34:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.887 16:34:10 -- common/autotest_common.sh@10 -- # set +x 00:23:01.887 16:34:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.887 16:34:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:01.887 16:34:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:23:01.887 16:34:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:01.887 16:34:10 -- host/auth.sh@44 -- # digest=sha384 00:23:01.887 16:34:10 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:01.887 16:34:10 -- host/auth.sh@44 -- # keyid=3 00:23:01.887 16:34:10 -- host/auth.sh@45 -- # key=DHHC-1:02:Nzk2M2YwZjYyYjg0Zjk2MzM2MGI5MGQ4OTE5MjIxN2Y2OGYyYjIzMjIyZjg3OWVmxPGfTg==: 00:23:01.887 16:34:10 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:01.887 16:34:10 -- host/auth.sh@48 -- # echo ffdhe3072 00:23:01.887 16:34:10 -- host/auth.sh@49 -- # echo DHHC-1:02:Nzk2M2YwZjYyYjg0Zjk2MzM2MGI5MGQ4OTE5MjIxN2Y2OGYyYjIzMjIyZjg3OWVmxPGfTg==: 00:23:01.887 16:34:10 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:23:01.887 16:34:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:01.887 16:34:10 -- host/auth.sh@68 -- # digest=sha384 00:23:01.888 16:34:10 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:23:01.888 16:34:10 -- host/auth.sh@68 -- # keyid=3 00:23:01.888 16:34:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:01.888 16:34:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.888 16:34:10 -- common/autotest_common.sh@10 -- # set +x 00:23:01.888 16:34:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:01.888 16:34:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:01.888 16:34:10 -- nvmf/common.sh@717 -- # 
local ip 00:23:01.888 16:34:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:01.888 16:34:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:01.888 16:34:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:01.888 16:34:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:01.888 16:34:10 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:01.888 16:34:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:01.888 16:34:10 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:01.888 16:34:10 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:01.888 16:34:10 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:01.888 16:34:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:23:01.888 16:34:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:01.888 16:34:10 -- common/autotest_common.sh@10 -- # set +x 00:23:02.145 nvme0n1 00:23:02.145 16:34:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.145 16:34:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:02.145 16:34:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.145 16:34:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:02.145 16:34:11 -- common/autotest_common.sh@10 -- # set +x 00:23:02.145 16:34:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.145 16:34:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.145 16:34:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:02.145 16:34:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.145 16:34:11 -- common/autotest_common.sh@10 -- # set +x 00:23:02.145 16:34:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.145 16:34:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:02.145 16:34:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:23:02.145 16:34:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:02.145 16:34:11 -- host/auth.sh@44 -- # digest=sha384 00:23:02.145 16:34:11 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:02.145 16:34:11 -- host/auth.sh@44 -- # keyid=4 00:23:02.145 16:34:11 -- host/auth.sh@45 -- # key=DHHC-1:03:NjAwMDI5ODQwMzE5MWI3ZTRjZDU1OGI4MTM5ZGY4NTZlMDhhOTUyMGI5MzY1OGFhMTJhOTA1NTk4MWE2ZmRkZgqk8k0=: 00:23:02.145 16:34:11 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:02.145 16:34:11 -- host/auth.sh@48 -- # echo ffdhe3072 00:23:02.145 16:34:11 -- host/auth.sh@49 -- # echo DHHC-1:03:NjAwMDI5ODQwMzE5MWI3ZTRjZDU1OGI4MTM5ZGY4NTZlMDhhOTUyMGI5MzY1OGFhMTJhOTA1NTk4MWE2ZmRkZgqk8k0=: 00:23:02.145 16:34:11 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:23:02.145 16:34:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:02.145 16:34:11 -- host/auth.sh@68 -- # digest=sha384 00:23:02.145 16:34:11 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:23:02.145 16:34:11 -- host/auth.sh@68 -- # keyid=4 00:23:02.145 16:34:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:02.145 16:34:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.145 16:34:11 -- common/autotest_common.sh@10 -- # set +x 00:23:02.145 16:34:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.145 16:34:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:02.145 16:34:11 -- nvmf/common.sh@717 -- # local ip 00:23:02.145 16:34:11 -- nvmf/common.sh@718 -- # 
ip_candidates=() 00:23:02.145 16:34:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:02.145 16:34:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:02.145 16:34:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:02.145 16:34:11 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:02.145 16:34:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:02.145 16:34:11 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:02.145 16:34:11 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:02.145 16:34:11 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:02.145 16:34:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:02.145 16:34:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.145 16:34:11 -- common/autotest_common.sh@10 -- # set +x 00:23:02.401 nvme0n1 00:23:02.401 16:34:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.401 16:34:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:02.401 16:34:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:02.401 16:34:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.401 16:34:11 -- common/autotest_common.sh@10 -- # set +x 00:23:02.401 16:34:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.401 16:34:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.401 16:34:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:02.401 16:34:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.401 16:34:11 -- common/autotest_common.sh@10 -- # set +x 00:23:02.401 16:34:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.401 16:34:11 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:23:02.402 16:34:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:02.402 16:34:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:23:02.402 16:34:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:02.402 16:34:11 -- host/auth.sh@44 -- # digest=sha384 00:23:02.402 16:34:11 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:02.402 16:34:11 -- host/auth.sh@44 -- # keyid=0 00:23:02.402 16:34:11 -- host/auth.sh@45 -- # key=DHHC-1:00:NTVhMTVjZDNlNWFlZTM2NTY0ZjU3MjFmNjQ2YmI2ZjEV/MxR: 00:23:02.402 16:34:11 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:02.402 16:34:11 -- host/auth.sh@48 -- # echo ffdhe4096 00:23:02.402 16:34:11 -- host/auth.sh@49 -- # echo DHHC-1:00:NTVhMTVjZDNlNWFlZTM2NTY0ZjU3MjFmNjQ2YmI2ZjEV/MxR: 00:23:02.402 16:34:11 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:23:02.402 16:34:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:02.402 16:34:11 -- host/auth.sh@68 -- # digest=sha384 00:23:02.402 16:34:11 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:23:02.402 16:34:11 -- host/auth.sh@68 -- # keyid=0 00:23:02.402 16:34:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:02.402 16:34:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.402 16:34:11 -- common/autotest_common.sh@10 -- # set +x 00:23:02.402 16:34:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.402 16:34:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:02.402 16:34:11 -- nvmf/common.sh@717 -- # local ip 00:23:02.402 16:34:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:02.402 16:34:11 -- nvmf/common.sh@718 -- # 
local -A ip_candidates 00:23:02.402 16:34:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:02.402 16:34:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:02.402 16:34:11 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:02.402 16:34:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:02.402 16:34:11 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:02.402 16:34:11 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:02.402 16:34:11 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:02.402 16:34:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:23:02.402 16:34:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.402 16:34:11 -- common/autotest_common.sh@10 -- # set +x 00:23:02.659 nvme0n1 00:23:02.659 16:34:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.659 16:34:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:02.659 16:34:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:02.659 16:34:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.659 16:34:11 -- common/autotest_common.sh@10 -- # set +x 00:23:02.659 16:34:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.659 16:34:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.659 16:34:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:02.659 16:34:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.659 16:34:11 -- common/autotest_common.sh@10 -- # set +x 00:23:02.659 16:34:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.659 16:34:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:02.659 16:34:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:23:02.659 16:34:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:02.659 16:34:11 -- host/auth.sh@44 -- # digest=sha384 00:23:02.659 16:34:11 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:02.659 16:34:11 -- host/auth.sh@44 -- # keyid=1 00:23:02.659 16:34:11 -- host/auth.sh@45 -- # key=DHHC-1:00:MWM4OWNmNjMxZDhlNjU4YWU5NmNhNTJhMWJlYTgxODcwMDlkNjZmYTRkNGM4MTA0bu+4fw==: 00:23:02.659 16:34:11 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:02.659 16:34:11 -- host/auth.sh@48 -- # echo ffdhe4096 00:23:02.659 16:34:11 -- host/auth.sh@49 -- # echo DHHC-1:00:MWM4OWNmNjMxZDhlNjU4YWU5NmNhNTJhMWJlYTgxODcwMDlkNjZmYTRkNGM4MTA0bu+4fw==: 00:23:02.659 16:34:11 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:23:02.659 16:34:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:02.659 16:34:11 -- host/auth.sh@68 -- # digest=sha384 00:23:02.659 16:34:11 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:23:02.659 16:34:11 -- host/auth.sh@68 -- # keyid=1 00:23:02.659 16:34:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:02.659 16:34:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.659 16:34:11 -- common/autotest_common.sh@10 -- # set +x 00:23:02.659 16:34:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.659 16:34:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:02.659 16:34:11 -- nvmf/common.sh@717 -- # local ip 00:23:02.659 16:34:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:02.659 16:34:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:02.659 16:34:11 -- nvmf/common.sh@720 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:02.659 16:34:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:02.659 16:34:11 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:02.659 16:34:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:02.659 16:34:11 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:02.659 16:34:11 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:02.659 16:34:11 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:02.659 16:34:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:23:02.659 16:34:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.659 16:34:11 -- common/autotest_common.sh@10 -- # set +x 00:23:02.917 nvme0n1 00:23:02.917 16:34:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.917 16:34:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:02.917 16:34:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.917 16:34:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:02.917 16:34:11 -- common/autotest_common.sh@10 -- # set +x 00:23:02.917 16:34:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.917 16:34:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.917 16:34:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:02.917 16:34:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.917 16:34:11 -- common/autotest_common.sh@10 -- # set +x 00:23:02.917 16:34:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.917 16:34:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:02.917 16:34:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:23:02.917 16:34:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:02.917 16:34:11 -- host/auth.sh@44 -- # digest=sha384 00:23:02.917 16:34:11 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:02.917 16:34:11 -- host/auth.sh@44 -- # keyid=2 00:23:02.917 16:34:11 -- host/auth.sh@45 -- # key=DHHC-1:01:MzRkNjkyZjY0NDVmMTk1ZWVkYjQwNDA2ZThkZTJjYTQw9r9h: 00:23:02.917 16:34:11 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:02.917 16:34:11 -- host/auth.sh@48 -- # echo ffdhe4096 00:23:02.917 16:34:11 -- host/auth.sh@49 -- # echo DHHC-1:01:MzRkNjkyZjY0NDVmMTk1ZWVkYjQwNDA2ZThkZTJjYTQw9r9h: 00:23:02.917 16:34:11 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:23:02.917 16:34:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:02.917 16:34:11 -- host/auth.sh@68 -- # digest=sha384 00:23:02.917 16:34:11 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:23:02.917 16:34:11 -- host/auth.sh@68 -- # keyid=2 00:23:02.917 16:34:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:02.917 16:34:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:02.917 16:34:11 -- common/autotest_common.sh@10 -- # set +x 00:23:02.917 16:34:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:02.917 16:34:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:02.917 16:34:11 -- nvmf/common.sh@717 -- # local ip 00:23:02.917 16:34:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:02.917 16:34:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:02.917 16:34:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:02.917 16:34:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:02.917 
16:34:11 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:02.917 16:34:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:02.917 16:34:11 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:02.917 16:34:11 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:02.917 16:34:11 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:03.175 16:34:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:03.175 16:34:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.175 16:34:11 -- common/autotest_common.sh@10 -- # set +x 00:23:03.175 nvme0n1 00:23:03.175 16:34:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.175 16:34:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:03.175 16:34:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:03.175 16:34:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.175 16:34:12 -- common/autotest_common.sh@10 -- # set +x 00:23:03.175 16:34:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.175 16:34:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.175 16:34:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:03.175 16:34:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.175 16:34:12 -- common/autotest_common.sh@10 -- # set +x 00:23:03.433 16:34:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.433 16:34:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:03.433 16:34:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:23:03.433 16:34:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:03.433 16:34:12 -- host/auth.sh@44 -- # digest=sha384 00:23:03.433 16:34:12 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:03.433 16:34:12 -- host/auth.sh@44 -- # keyid=3 00:23:03.433 16:34:12 -- host/auth.sh@45 -- # key=DHHC-1:02:Nzk2M2YwZjYyYjg0Zjk2MzM2MGI5MGQ4OTE5MjIxN2Y2OGYyYjIzMjIyZjg3OWVmxPGfTg==: 00:23:03.433 16:34:12 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:03.433 16:34:12 -- host/auth.sh@48 -- # echo ffdhe4096 00:23:03.433 16:34:12 -- host/auth.sh@49 -- # echo DHHC-1:02:Nzk2M2YwZjYyYjg0Zjk2MzM2MGI5MGQ4OTE5MjIxN2Y2OGYyYjIzMjIyZjg3OWVmxPGfTg==: 00:23:03.433 16:34:12 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:23:03.433 16:34:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:03.433 16:34:12 -- host/auth.sh@68 -- # digest=sha384 00:23:03.433 16:34:12 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:23:03.433 16:34:12 -- host/auth.sh@68 -- # keyid=3 00:23:03.433 16:34:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:03.433 16:34:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.433 16:34:12 -- common/autotest_common.sh@10 -- # set +x 00:23:03.433 16:34:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.433 16:34:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:03.433 16:34:12 -- nvmf/common.sh@717 -- # local ip 00:23:03.433 16:34:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:03.433 16:34:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:03.433 16:34:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:03.433 16:34:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:03.433 16:34:12 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:03.433 16:34:12 -- nvmf/common.sh@723 -- 
# [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:03.433 16:34:12 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:03.433 16:34:12 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:03.433 16:34:12 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:03.433 16:34:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:23:03.433 16:34:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.433 16:34:12 -- common/autotest_common.sh@10 -- # set +x 00:23:03.433 nvme0n1 00:23:03.433 16:34:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.433 16:34:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:03.433 16:34:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:03.433 16:34:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.433 16:34:12 -- common/autotest_common.sh@10 -- # set +x 00:23:03.433 16:34:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.691 16:34:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.691 16:34:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:03.691 16:34:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.691 16:34:12 -- common/autotest_common.sh@10 -- # set +x 00:23:03.691 16:34:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.691 16:34:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:03.691 16:34:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:23:03.691 16:34:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:03.691 16:34:12 -- host/auth.sh@44 -- # digest=sha384 00:23:03.691 16:34:12 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:03.691 16:34:12 -- host/auth.sh@44 -- # keyid=4 00:23:03.691 16:34:12 -- host/auth.sh@45 -- # key=DHHC-1:03:NjAwMDI5ODQwMzE5MWI3ZTRjZDU1OGI4MTM5ZGY4NTZlMDhhOTUyMGI5MzY1OGFhMTJhOTA1NTk4MWE2ZmRkZgqk8k0=: 00:23:03.691 16:34:12 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:03.691 16:34:12 -- host/auth.sh@48 -- # echo ffdhe4096 00:23:03.691 16:34:12 -- host/auth.sh@49 -- # echo DHHC-1:03:NjAwMDI5ODQwMzE5MWI3ZTRjZDU1OGI4MTM5ZGY4NTZlMDhhOTUyMGI5MzY1OGFhMTJhOTA1NTk4MWE2ZmRkZgqk8k0=: 00:23:03.691 16:34:12 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:23:03.691 16:34:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:03.691 16:34:12 -- host/auth.sh@68 -- # digest=sha384 00:23:03.691 16:34:12 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:23:03.691 16:34:12 -- host/auth.sh@68 -- # keyid=4 00:23:03.691 16:34:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:03.691 16:34:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.691 16:34:12 -- common/autotest_common.sh@10 -- # set +x 00:23:03.691 16:34:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.691 16:34:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:03.691 16:34:12 -- nvmf/common.sh@717 -- # local ip 00:23:03.691 16:34:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:03.691 16:34:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:03.691 16:34:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:03.691 16:34:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:03.691 16:34:12 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:03.691 16:34:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:03.691 16:34:12 -- 
nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:03.691 16:34:12 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:03.691 16:34:12 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:03.691 16:34:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:03.691 16:34:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.691 16:34:12 -- common/autotest_common.sh@10 -- # set +x 00:23:03.948 nvme0n1 00:23:03.948 16:34:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.948 16:34:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:03.948 16:34:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:03.948 16:34:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.948 16:34:12 -- common/autotest_common.sh@10 -- # set +x 00:23:03.948 16:34:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.948 16:34:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.948 16:34:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:03.948 16:34:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.948 16:34:12 -- common/autotest_common.sh@10 -- # set +x 00:23:03.948 16:34:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.948 16:34:12 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:23:03.948 16:34:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:03.948 16:34:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:23:03.948 16:34:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:03.948 16:34:12 -- host/auth.sh@44 -- # digest=sha384 00:23:03.948 16:34:12 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:03.948 16:34:12 -- host/auth.sh@44 -- # keyid=0 00:23:03.948 16:34:12 -- host/auth.sh@45 -- # key=DHHC-1:00:NTVhMTVjZDNlNWFlZTM2NTY0ZjU3MjFmNjQ2YmI2ZjEV/MxR: 00:23:03.948 16:34:12 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:03.948 16:34:12 -- host/auth.sh@48 -- # echo ffdhe6144 00:23:03.948 16:34:12 -- host/auth.sh@49 -- # echo DHHC-1:00:NTVhMTVjZDNlNWFlZTM2NTY0ZjU3MjFmNjQ2YmI2ZjEV/MxR: 00:23:03.948 16:34:12 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:23:03.948 16:34:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:03.948 16:34:12 -- host/auth.sh@68 -- # digest=sha384 00:23:03.948 16:34:12 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:23:03.948 16:34:12 -- host/auth.sh@68 -- # keyid=0 00:23:03.948 16:34:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:03.948 16:34:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.948 16:34:12 -- common/autotest_common.sh@10 -- # set +x 00:23:03.948 16:34:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:03.948 16:34:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:03.948 16:34:12 -- nvmf/common.sh@717 -- # local ip 00:23:03.948 16:34:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:03.948 16:34:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:03.948 16:34:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:03.948 16:34:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:03.948 16:34:12 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:03.948 16:34:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:03.948 16:34:12 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:03.948 
16:34:12 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:03.948 16:34:12 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:03.948 16:34:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:23:03.948 16:34:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:03.948 16:34:12 -- common/autotest_common.sh@10 -- # set +x 00:23:04.206 nvme0n1 00:23:04.206 16:34:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.206 16:34:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:04.206 16:34:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:04.206 16:34:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.206 16:34:13 -- common/autotest_common.sh@10 -- # set +x 00:23:04.206 16:34:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.206 16:34:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.206 16:34:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:04.206 16:34:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.206 16:34:13 -- common/autotest_common.sh@10 -- # set +x 00:23:04.206 16:34:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.206 16:34:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:04.206 16:34:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:23:04.206 16:34:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:04.206 16:34:13 -- host/auth.sh@44 -- # digest=sha384 00:23:04.206 16:34:13 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:04.206 16:34:13 -- host/auth.sh@44 -- # keyid=1 00:23:04.206 16:34:13 -- host/auth.sh@45 -- # key=DHHC-1:00:MWM4OWNmNjMxZDhlNjU4YWU5NmNhNTJhMWJlYTgxODcwMDlkNjZmYTRkNGM4MTA0bu+4fw==: 00:23:04.206 16:34:13 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:04.206 16:34:13 -- host/auth.sh@48 -- # echo ffdhe6144 00:23:04.206 16:34:13 -- host/auth.sh@49 -- # echo DHHC-1:00:MWM4OWNmNjMxZDhlNjU4YWU5NmNhNTJhMWJlYTgxODcwMDlkNjZmYTRkNGM4MTA0bu+4fw==: 00:23:04.206 16:34:13 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:23:04.206 16:34:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:04.206 16:34:13 -- host/auth.sh@68 -- # digest=sha384 00:23:04.206 16:34:13 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:23:04.206 16:34:13 -- host/auth.sh@68 -- # keyid=1 00:23:04.206 16:34:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:04.206 16:34:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.206 16:34:13 -- common/autotest_common.sh@10 -- # set +x 00:23:04.206 16:34:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.206 16:34:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:04.206 16:34:13 -- nvmf/common.sh@717 -- # local ip 00:23:04.206 16:34:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:04.206 16:34:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:04.206 16:34:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:04.206 16:34:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:04.206 16:34:13 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:04.206 16:34:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:04.206 16:34:13 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:04.206 16:34:13 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:04.206 16:34:13 -- 
nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:04.206 16:34:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:23:04.206 16:34:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.206 16:34:13 -- common/autotest_common.sh@10 -- # set +x 00:23:04.770 nvme0n1 00:23:04.770 16:34:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.770 16:34:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:04.770 16:34:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:04.770 16:34:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.770 16:34:13 -- common/autotest_common.sh@10 -- # set +x 00:23:04.770 16:34:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.770 16:34:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.770 16:34:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:04.770 16:34:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.770 16:34:13 -- common/autotest_common.sh@10 -- # set +x 00:23:04.770 16:34:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.770 16:34:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:04.770 16:34:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:23:04.770 16:34:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:04.770 16:34:13 -- host/auth.sh@44 -- # digest=sha384 00:23:04.770 16:34:13 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:04.770 16:34:13 -- host/auth.sh@44 -- # keyid=2 00:23:04.770 16:34:13 -- host/auth.sh@45 -- # key=DHHC-1:01:MzRkNjkyZjY0NDVmMTk1ZWVkYjQwNDA2ZThkZTJjYTQw9r9h: 00:23:04.770 16:34:13 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:04.770 16:34:13 -- host/auth.sh@48 -- # echo ffdhe6144 00:23:04.770 16:34:13 -- host/auth.sh@49 -- # echo DHHC-1:01:MzRkNjkyZjY0NDVmMTk1ZWVkYjQwNDA2ZThkZTJjYTQw9r9h: 00:23:04.770 16:34:13 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:23:04.770 16:34:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:04.770 16:34:13 -- host/auth.sh@68 -- # digest=sha384 00:23:04.770 16:34:13 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:23:04.770 16:34:13 -- host/auth.sh@68 -- # keyid=2 00:23:04.770 16:34:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:04.770 16:34:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.770 16:34:13 -- common/autotest_common.sh@10 -- # set +x 00:23:04.770 16:34:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.770 16:34:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:04.770 16:34:13 -- nvmf/common.sh@717 -- # local ip 00:23:04.770 16:34:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:04.770 16:34:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:04.770 16:34:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:04.770 16:34:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:04.770 16:34:13 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:04.770 16:34:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:04.770 16:34:13 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:04.770 16:34:13 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:04.770 16:34:13 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:04.770 16:34:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f 
ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:04.770 16:34:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.770 16:34:13 -- common/autotest_common.sh@10 -- # set +x 00:23:05.028 nvme0n1 00:23:05.028 16:34:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.028 16:34:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:05.028 16:34:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.028 16:34:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:05.028 16:34:13 -- common/autotest_common.sh@10 -- # set +x 00:23:05.028 16:34:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.028 16:34:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.028 16:34:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:05.028 16:34:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.028 16:34:14 -- common/autotest_common.sh@10 -- # set +x 00:23:05.028 16:34:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.028 16:34:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:05.028 16:34:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:23:05.028 16:34:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:05.028 16:34:14 -- host/auth.sh@44 -- # digest=sha384 00:23:05.028 16:34:14 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:05.028 16:34:14 -- host/auth.sh@44 -- # keyid=3 00:23:05.028 16:34:14 -- host/auth.sh@45 -- # key=DHHC-1:02:Nzk2M2YwZjYyYjg0Zjk2MzM2MGI5MGQ4OTE5MjIxN2Y2OGYyYjIzMjIyZjg3OWVmxPGfTg==: 00:23:05.028 16:34:14 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:05.028 16:34:14 -- host/auth.sh@48 -- # echo ffdhe6144 00:23:05.028 16:34:14 -- host/auth.sh@49 -- # echo DHHC-1:02:Nzk2M2YwZjYyYjg0Zjk2MzM2MGI5MGQ4OTE5MjIxN2Y2OGYyYjIzMjIyZjg3OWVmxPGfTg==: 00:23:05.028 16:34:14 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:23:05.028 16:34:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:05.028 16:34:14 -- host/auth.sh@68 -- # digest=sha384 00:23:05.028 16:34:14 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:23:05.028 16:34:14 -- host/auth.sh@68 -- # keyid=3 00:23:05.028 16:34:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:05.028 16:34:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.028 16:34:14 -- common/autotest_common.sh@10 -- # set +x 00:23:05.028 16:34:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.028 16:34:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:05.028 16:34:14 -- nvmf/common.sh@717 -- # local ip 00:23:05.028 16:34:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:05.028 16:34:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:05.028 16:34:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:05.028 16:34:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:05.028 16:34:14 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:05.028 16:34:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:05.028 16:34:14 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:05.028 16:34:14 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:05.028 16:34:14 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:05.028 16:34:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key3 00:23:05.285 16:34:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.285 16:34:14 -- common/autotest_common.sh@10 -- # set +x 00:23:05.542 nvme0n1 00:23:05.542 16:34:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.542 16:34:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:05.542 16:34:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.542 16:34:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:05.542 16:34:14 -- common/autotest_common.sh@10 -- # set +x 00:23:05.542 16:34:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.542 16:34:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.542 16:34:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:05.542 16:34:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.542 16:34:14 -- common/autotest_common.sh@10 -- # set +x 00:23:05.542 16:34:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.542 16:34:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:05.542 16:34:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:23:05.542 16:34:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:05.542 16:34:14 -- host/auth.sh@44 -- # digest=sha384 00:23:05.542 16:34:14 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:05.542 16:34:14 -- host/auth.sh@44 -- # keyid=4 00:23:05.542 16:34:14 -- host/auth.sh@45 -- # key=DHHC-1:03:NjAwMDI5ODQwMzE5MWI3ZTRjZDU1OGI4MTM5ZGY4NTZlMDhhOTUyMGI5MzY1OGFhMTJhOTA1NTk4MWE2ZmRkZgqk8k0=: 00:23:05.542 16:34:14 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:05.542 16:34:14 -- host/auth.sh@48 -- # echo ffdhe6144 00:23:05.542 16:34:14 -- host/auth.sh@49 -- # echo DHHC-1:03:NjAwMDI5ODQwMzE5MWI3ZTRjZDU1OGI4MTM5ZGY4NTZlMDhhOTUyMGI5MzY1OGFhMTJhOTA1NTk4MWE2ZmRkZgqk8k0=: 00:23:05.542 16:34:14 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:23:05.542 16:34:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:05.542 16:34:14 -- host/auth.sh@68 -- # digest=sha384 00:23:05.542 16:34:14 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:23:05.542 16:34:14 -- host/auth.sh@68 -- # keyid=4 00:23:05.542 16:34:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:05.542 16:34:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.542 16:34:14 -- common/autotest_common.sh@10 -- # set +x 00:23:05.542 16:34:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.542 16:34:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:05.542 16:34:14 -- nvmf/common.sh@717 -- # local ip 00:23:05.542 16:34:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:05.542 16:34:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:05.542 16:34:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:05.542 16:34:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:05.542 16:34:14 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:05.542 16:34:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:05.542 16:34:14 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:05.542 16:34:14 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:05.542 16:34:14 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:05.542 16:34:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:05.542 16:34:14 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.542 16:34:14 -- common/autotest_common.sh@10 -- # set +x 00:23:05.800 nvme0n1 00:23:05.800 16:34:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.800 16:34:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:05.800 16:34:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.800 16:34:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:05.800 16:34:14 -- common/autotest_common.sh@10 -- # set +x 00:23:05.800 16:34:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:06.058 16:34:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.058 16:34:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:06.058 16:34:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.058 16:34:14 -- common/autotest_common.sh@10 -- # set +x 00:23:06.058 16:34:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:06.058 16:34:14 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:23:06.058 16:34:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:06.058 16:34:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:23:06.058 16:34:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:06.058 16:34:14 -- host/auth.sh@44 -- # digest=sha384 00:23:06.058 16:34:14 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:06.058 16:34:14 -- host/auth.sh@44 -- # keyid=0 00:23:06.058 16:34:14 -- host/auth.sh@45 -- # key=DHHC-1:00:NTVhMTVjZDNlNWFlZTM2NTY0ZjU3MjFmNjQ2YmI2ZjEV/MxR: 00:23:06.058 16:34:14 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:06.058 16:34:14 -- host/auth.sh@48 -- # echo ffdhe8192 00:23:06.058 16:34:14 -- host/auth.sh@49 -- # echo DHHC-1:00:NTVhMTVjZDNlNWFlZTM2NTY0ZjU3MjFmNjQ2YmI2ZjEV/MxR: 00:23:06.058 16:34:14 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:23:06.058 16:34:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:06.058 16:34:14 -- host/auth.sh@68 -- # digest=sha384 00:23:06.058 16:34:14 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:23:06.058 16:34:14 -- host/auth.sh@68 -- # keyid=0 00:23:06.058 16:34:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:06.058 16:34:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.058 16:34:14 -- common/autotest_common.sh@10 -- # set +x 00:23:06.058 16:34:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:06.058 16:34:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:06.058 16:34:14 -- nvmf/common.sh@717 -- # local ip 00:23:06.058 16:34:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:06.058 16:34:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:06.058 16:34:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:06.058 16:34:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:06.058 16:34:14 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:06.058 16:34:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:06.058 16:34:14 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:06.058 16:34:14 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:06.058 16:34:14 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:06.058 16:34:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:23:06.058 16:34:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.058 
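Each nvme0n1 block in this stretch is the same host-side round trip for one (digest, dhgroup, keyid) combination: constrain the initiator's DH-HMAC-CHAP parameters, attach with the matching key, verify that authentication produced a controller named nvme0, then detach. Condensed from the rpc_cmd calls visible above, with the address, NQNs, and key names taken verbatim from the trace:

  # Host-side sequence repeated per combination (host/auth.sh, connect_authenticate).
  connect_authenticate() {
      local digest=$1 dhgroup=$2 keyid=$3

      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
          -a "$(get_main_ns_ip)" -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key$keyid"
      # Authentication succeeded iff the controller is visible afterwards.
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
  }

The same RPCs (bdev_nvme_set_options, bdev_nvme_attach_controller --dhchap-key, bdev_nvme_get_controllers, bdev_nvme_detach_controller) can be issued by hand through SPDK's scripts/rpc.py; rpc_cmd is the test suite's wrapper around it.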
16:34:14 -- common/autotest_common.sh@10 -- # set +x 00:23:06.624 nvme0n1 00:23:06.624 16:34:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:06.624 16:34:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:06.624 16:34:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.624 16:34:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:06.624 16:34:15 -- common/autotest_common.sh@10 -- # set +x 00:23:06.624 16:34:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:06.624 16:34:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.624 16:34:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:06.624 16:34:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.624 16:34:15 -- common/autotest_common.sh@10 -- # set +x 00:23:06.624 16:34:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:06.624 16:34:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:06.624 16:34:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:23:06.624 16:34:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:06.624 16:34:15 -- host/auth.sh@44 -- # digest=sha384 00:23:06.624 16:34:15 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:06.624 16:34:15 -- host/auth.sh@44 -- # keyid=1 00:23:06.624 16:34:15 -- host/auth.sh@45 -- # key=DHHC-1:00:MWM4OWNmNjMxZDhlNjU4YWU5NmNhNTJhMWJlYTgxODcwMDlkNjZmYTRkNGM4MTA0bu+4fw==: 00:23:06.624 16:34:15 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:06.624 16:34:15 -- host/auth.sh@48 -- # echo ffdhe8192 00:23:06.624 16:34:15 -- host/auth.sh@49 -- # echo DHHC-1:00:MWM4OWNmNjMxZDhlNjU4YWU5NmNhNTJhMWJlYTgxODcwMDlkNjZmYTRkNGM4MTA0bu+4fw==: 00:23:06.624 16:34:15 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:23:06.624 16:34:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:06.624 16:34:15 -- host/auth.sh@68 -- # digest=sha384 00:23:06.624 16:34:15 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:23:06.624 16:34:15 -- host/auth.sh@68 -- # keyid=1 00:23:06.624 16:34:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:06.624 16:34:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.624 16:34:15 -- common/autotest_common.sh@10 -- # set +x 00:23:06.624 16:34:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:06.624 16:34:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:06.624 16:34:15 -- nvmf/common.sh@717 -- # local ip 00:23:06.624 16:34:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:06.624 16:34:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:06.624 16:34:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:06.624 16:34:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:06.624 16:34:15 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:06.624 16:34:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:06.624 16:34:15 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:06.624 16:34:15 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:06.624 16:34:15 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:06.624 16:34:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:23:06.624 16:34:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.624 16:34:15 -- common/autotest_common.sh@10 -- # set +x 00:23:07.189 nvme0n1 00:23:07.189 16:34:16 
-- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:07.189 16:34:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:07.189 16:34:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:07.189 16:34:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:07.189 16:34:16 -- common/autotest_common.sh@10 -- # set +x 00:23:07.189 16:34:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:07.189 16:34:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.189 16:34:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:07.189 16:34:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:07.189 16:34:16 -- common/autotest_common.sh@10 -- # set +x 00:23:07.189 16:34:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:07.189 16:34:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:07.189 16:34:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:23:07.189 16:34:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:07.189 16:34:16 -- host/auth.sh@44 -- # digest=sha384 00:23:07.189 16:34:16 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:07.189 16:34:16 -- host/auth.sh@44 -- # keyid=2 00:23:07.189 16:34:16 -- host/auth.sh@45 -- # key=DHHC-1:01:MzRkNjkyZjY0NDVmMTk1ZWVkYjQwNDA2ZThkZTJjYTQw9r9h: 00:23:07.189 16:34:16 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:07.189 16:34:16 -- host/auth.sh@48 -- # echo ffdhe8192 00:23:07.189 16:34:16 -- host/auth.sh@49 -- # echo DHHC-1:01:MzRkNjkyZjY0NDVmMTk1ZWVkYjQwNDA2ZThkZTJjYTQw9r9h: 00:23:07.189 16:34:16 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2 00:23:07.189 16:34:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:07.189 16:34:16 -- host/auth.sh@68 -- # digest=sha384 00:23:07.189 16:34:16 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:23:07.189 16:34:16 -- host/auth.sh@68 -- # keyid=2 00:23:07.189 16:34:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:07.189 16:34:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:07.189 16:34:16 -- common/autotest_common.sh@10 -- # set +x 00:23:07.189 16:34:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:07.189 16:34:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:07.189 16:34:16 -- nvmf/common.sh@717 -- # local ip 00:23:07.189 16:34:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:07.189 16:34:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:07.189 16:34:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:07.189 16:34:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:07.189 16:34:16 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:07.189 16:34:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:07.189 16:34:16 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:07.189 16:34:16 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:07.189 16:34:16 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:07.189 16:34:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:07.189 16:34:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:07.189 16:34:16 -- common/autotest_common.sh@10 -- # set +x 00:23:07.754 nvme0n1 00:23:07.754 16:34:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:07.754 16:34:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:07.754 
16:34:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:07.754 16:34:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:07.754 16:34:16 -- common/autotest_common.sh@10 -- # set +x 00:23:07.754 16:34:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:07.754 16:34:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.754 16:34:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:07.754 16:34:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:07.754 16:34:16 -- common/autotest_common.sh@10 -- # set +x 00:23:07.754 16:34:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:07.754 16:34:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:07.754 16:34:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:23:07.754 16:34:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:07.754 16:34:16 -- host/auth.sh@44 -- # digest=sha384 00:23:07.754 16:34:16 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:07.754 16:34:16 -- host/auth.sh@44 -- # keyid=3 00:23:07.754 16:34:16 -- host/auth.sh@45 -- # key=DHHC-1:02:Nzk2M2YwZjYyYjg0Zjk2MzM2MGI5MGQ4OTE5MjIxN2Y2OGYyYjIzMjIyZjg3OWVmxPGfTg==: 00:23:07.754 16:34:16 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:07.754 16:34:16 -- host/auth.sh@48 -- # echo ffdhe8192 00:23:07.754 16:34:16 -- host/auth.sh@49 -- # echo DHHC-1:02:Nzk2M2YwZjYyYjg0Zjk2MzM2MGI5MGQ4OTE5MjIxN2Y2OGYyYjIzMjIyZjg3OWVmxPGfTg==: 00:23:07.754 16:34:16 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:23:07.754 16:34:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:07.754 16:34:16 -- host/auth.sh@68 -- # digest=sha384 00:23:07.754 16:34:16 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:23:07.754 16:34:16 -- host/auth.sh@68 -- # keyid=3 00:23:07.754 16:34:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:07.754 16:34:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:07.754 16:34:16 -- common/autotest_common.sh@10 -- # set +x 00:23:07.754 16:34:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:07.754 16:34:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:07.754 16:34:16 -- nvmf/common.sh@717 -- # local ip 00:23:07.754 16:34:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:07.754 16:34:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:07.754 16:34:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:07.754 16:34:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:07.754 16:34:16 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:07.754 16:34:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:07.754 16:34:16 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:07.754 16:34:16 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:07.754 16:34:16 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:07.754 16:34:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:23:07.754 16:34:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:07.754 16:34:16 -- common/autotest_common.sh@10 -- # set +x 00:23:08.319 nvme0n1 00:23:08.319 16:34:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.319 16:34:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:08.319 16:34:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.319 16:34:17 -- 
host/auth.sh@73 -- # jq -r '.[].name' 00:23:08.319 16:34:17 -- common/autotest_common.sh@10 -- # set +x 00:23:08.319 16:34:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.319 16:34:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.319 16:34:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:08.319 16:34:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.319 16:34:17 -- common/autotest_common.sh@10 -- # set +x 00:23:08.319 16:34:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.319 16:34:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:08.319 16:34:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:23:08.319 16:34:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:08.319 16:34:17 -- host/auth.sh@44 -- # digest=sha384 00:23:08.319 16:34:17 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:08.319 16:34:17 -- host/auth.sh@44 -- # keyid=4 00:23:08.319 16:34:17 -- host/auth.sh@45 -- # key=DHHC-1:03:NjAwMDI5ODQwMzE5MWI3ZTRjZDU1OGI4MTM5ZGY4NTZlMDhhOTUyMGI5MzY1OGFhMTJhOTA1NTk4MWE2ZmRkZgqk8k0=: 00:23:08.319 16:34:17 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:08.319 16:34:17 -- host/auth.sh@48 -- # echo ffdhe8192 00:23:08.320 16:34:17 -- host/auth.sh@49 -- # echo DHHC-1:03:NjAwMDI5ODQwMzE5MWI3ZTRjZDU1OGI4MTM5ZGY4NTZlMDhhOTUyMGI5MzY1OGFhMTJhOTA1NTk4MWE2ZmRkZgqk8k0=: 00:23:08.320 16:34:17 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:23:08.320 16:34:17 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:08.320 16:34:17 -- host/auth.sh@68 -- # digest=sha384 00:23:08.320 16:34:17 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:23:08.320 16:34:17 -- host/auth.sh@68 -- # keyid=4 00:23:08.320 16:34:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:08.320 16:34:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.320 16:34:17 -- common/autotest_common.sh@10 -- # set +x 00:23:08.320 16:34:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.320 16:34:17 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:08.320 16:34:17 -- nvmf/common.sh@717 -- # local ip 00:23:08.320 16:34:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:08.320 16:34:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:08.320 16:34:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:08.320 16:34:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:08.320 16:34:17 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:08.320 16:34:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:08.320 16:34:17 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:08.320 16:34:17 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:08.320 16:34:17 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:08.320 16:34:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:08.320 16:34:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.320 16:34:17 -- common/autotest_common.sh@10 -- # set +x 00:23:08.886 nvme0n1 00:23:08.886 16:34:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.886 16:34:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:08.886 16:34:17 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:08.886 16:34:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.886 
16:34:17 -- common/autotest_common.sh@10 -- # set +x 00:23:08.886 16:34:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.886 16:34:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.886 16:34:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:08.886 16:34:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.886 16:34:17 -- common/autotest_common.sh@10 -- # set +x 00:23:08.886 16:34:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.886 16:34:17 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:23:08.886 16:34:17 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:23:08.886 16:34:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:08.886 16:34:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:23:08.886 16:34:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:08.886 16:34:17 -- host/auth.sh@44 -- # digest=sha512 00:23:08.886 16:34:17 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:08.886 16:34:17 -- host/auth.sh@44 -- # keyid=0 00:23:08.886 16:34:17 -- host/auth.sh@45 -- # key=DHHC-1:00:NTVhMTVjZDNlNWFlZTM2NTY0ZjU3MjFmNjQ2YmI2ZjEV/MxR: 00:23:08.886 16:34:17 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:08.886 16:34:17 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:08.886 16:34:17 -- host/auth.sh@49 -- # echo DHHC-1:00:NTVhMTVjZDNlNWFlZTM2NTY0ZjU3MjFmNjQ2YmI2ZjEV/MxR: 00:23:08.886 16:34:17 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:23:08.886 16:34:17 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:08.886 16:34:17 -- host/auth.sh@68 -- # digest=sha512 00:23:08.886 16:34:17 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:08.886 16:34:17 -- host/auth.sh@68 -- # keyid=0 00:23:08.886 16:34:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:08.886 16:34:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.886 16:34:17 -- common/autotest_common.sh@10 -- # set +x 00:23:08.886 16:34:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.886 16:34:17 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:08.886 16:34:17 -- nvmf/common.sh@717 -- # local ip 00:23:08.886 16:34:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:08.886 16:34:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:08.886 16:34:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:08.886 16:34:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:08.886 16:34:17 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:08.886 16:34:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:08.886 16:34:17 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:08.886 16:34:17 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:08.886 16:34:17 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:08.886 16:34:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:23:08.886 16:34:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.886 16:34:17 -- common/autotest_common.sh@10 -- # set +x 00:23:09.144 nvme0n1 00:23:09.144 16:34:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.144 16:34:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:09.144 16:34:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:09.144 16:34:18 -- common/autotest_common.sh@549 -- # xtrace_disable 
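Every attach in this trace is preceded by the nvmf/common.sh@717-731 frames, which resolve the address to dial from the transport in use (rdma here, so NVMF_FIRST_TARGET_IP, i.e. 192.168.100.8). A rough reconstruction of that helper follows; it is a sketch, not the verbatim function, and the transport variable name ($TEST_TRANSPORT) is assumed rather than visible in the trace:

# Reconstruction (not verbatim) of the address selection traced at nvmf/common.sh@717-731.
get_main_ns_ip() {
	local ip
	local -A ip_candidates=(
		["rdma"]=NVMF_FIRST_TARGET_IP
		["tcp"]=NVMF_INITIATOR_IP
	)

	ip=${ip_candidates[$TEST_TRANSPORT]}   # e.g. rdma -> NVMF_FIRST_TARGET_IP
	[[ -n $ip ]] || return 1               # unknown transport
	ip=${!ip}                              # dereference to the actual address
	[[ -n $ip ]] || return 1               # address not populated
	echo "$ip"                             # here: 192.168.100.8
}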
00:23:09.144 16:34:18 -- common/autotest_common.sh@10 -- # set +x 00:23:09.144 16:34:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.144 16:34:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.144 16:34:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:09.144 16:34:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.144 16:34:18 -- common/autotest_common.sh@10 -- # set +x 00:23:09.144 16:34:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.144 16:34:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:09.144 16:34:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:23:09.144 16:34:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:09.144 16:34:18 -- host/auth.sh@44 -- # digest=sha512 00:23:09.144 16:34:18 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:09.144 16:34:18 -- host/auth.sh@44 -- # keyid=1 00:23:09.144 16:34:18 -- host/auth.sh@45 -- # key=DHHC-1:00:MWM4OWNmNjMxZDhlNjU4YWU5NmNhNTJhMWJlYTgxODcwMDlkNjZmYTRkNGM4MTA0bu+4fw==: 00:23:09.144 16:34:18 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:09.144 16:34:18 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:09.144 16:34:18 -- host/auth.sh@49 -- # echo DHHC-1:00:MWM4OWNmNjMxZDhlNjU4YWU5NmNhNTJhMWJlYTgxODcwMDlkNjZmYTRkNGM4MTA0bu+4fw==: 00:23:09.144 16:34:18 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1 00:23:09.144 16:34:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:09.144 16:34:18 -- host/auth.sh@68 -- # digest=sha512 00:23:09.144 16:34:18 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:09.144 16:34:18 -- host/auth.sh@68 -- # keyid=1 00:23:09.144 16:34:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:09.144 16:34:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.144 16:34:18 -- common/autotest_common.sh@10 -- # set +x 00:23:09.144 16:34:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.144 16:34:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:09.144 16:34:18 -- nvmf/common.sh@717 -- # local ip 00:23:09.144 16:34:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:09.144 16:34:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:09.144 16:34:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.144 16:34:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.144 16:34:18 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:09.144 16:34:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:09.144 16:34:18 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:09.144 16:34:18 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:09.144 16:34:18 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:09.145 16:34:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:23:09.145 16:34:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.145 16:34:18 -- common/autotest_common.sh@10 -- # set +x 00:23:09.402 nvme0n1 00:23:09.402 16:34:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.402 16:34:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:09.402 16:34:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.402 16:34:18 -- common/autotest_common.sh@10 -- # set +x 00:23:09.402 16:34:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:09.402 16:34:18 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.402 16:34:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.402 16:34:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:09.402 16:34:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.402 16:34:18 -- common/autotest_common.sh@10 -- # set +x 00:23:09.402 16:34:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.402 16:34:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:09.402 16:34:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:23:09.402 16:34:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:09.402 16:34:18 -- host/auth.sh@44 -- # digest=sha512 00:23:09.402 16:34:18 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:09.402 16:34:18 -- host/auth.sh@44 -- # keyid=2 00:23:09.402 16:34:18 -- host/auth.sh@45 -- # key=DHHC-1:01:MzRkNjkyZjY0NDVmMTk1ZWVkYjQwNDA2ZThkZTJjYTQw9r9h: 00:23:09.402 16:34:18 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:09.402 16:34:18 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:09.402 16:34:18 -- host/auth.sh@49 -- # echo DHHC-1:01:MzRkNjkyZjY0NDVmMTk1ZWVkYjQwNDA2ZThkZTJjYTQw9r9h: 00:23:09.402 16:34:18 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:23:09.402 16:34:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:09.402 16:34:18 -- host/auth.sh@68 -- # digest=sha512 00:23:09.402 16:34:18 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:09.402 16:34:18 -- host/auth.sh@68 -- # keyid=2 00:23:09.402 16:34:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:09.402 16:34:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.402 16:34:18 -- common/autotest_common.sh@10 -- # set +x 00:23:09.402 16:34:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.402 16:34:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:09.402 16:34:18 -- nvmf/common.sh@717 -- # local ip 00:23:09.402 16:34:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:09.402 16:34:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:09.402 16:34:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.402 16:34:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.402 16:34:18 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:09.402 16:34:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:09.402 16:34:18 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:09.402 16:34:18 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:09.402 16:34:18 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:09.402 16:34:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:09.402 16:34:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.402 16:34:18 -- common/autotest_common.sh@10 -- # set +x 00:23:09.660 nvme0n1 00:23:09.660 16:34:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.660 16:34:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:09.660 16:34:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:09.660 16:34:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.660 16:34:18 -- common/autotest_common.sh@10 -- # set +x 00:23:09.660 16:34:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.660 16:34:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.660 16:34:18 -- 
host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:09.660 16:34:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.660 16:34:18 -- common/autotest_common.sh@10 -- # set +x 00:23:09.660 16:34:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.660 16:34:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:09.660 16:34:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:23:09.660 16:34:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:09.660 16:34:18 -- host/auth.sh@44 -- # digest=sha512 00:23:09.660 16:34:18 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:09.660 16:34:18 -- host/auth.sh@44 -- # keyid=3 00:23:09.660 16:34:18 -- host/auth.sh@45 -- # key=DHHC-1:02:Nzk2M2YwZjYyYjg0Zjk2MzM2MGI5MGQ4OTE5MjIxN2Y2OGYyYjIzMjIyZjg3OWVmxPGfTg==: 00:23:09.660 16:34:18 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:09.660 16:34:18 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:09.660 16:34:18 -- host/auth.sh@49 -- # echo DHHC-1:02:Nzk2M2YwZjYyYjg0Zjk2MzM2MGI5MGQ4OTE5MjIxN2Y2OGYyYjIzMjIyZjg3OWVmxPGfTg==: 00:23:09.660 16:34:18 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:23:09.660 16:34:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:09.660 16:34:18 -- host/auth.sh@68 -- # digest=sha512 00:23:09.660 16:34:18 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:09.660 16:34:18 -- host/auth.sh@68 -- # keyid=3 00:23:09.660 16:34:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:09.660 16:34:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.660 16:34:18 -- common/autotest_common.sh@10 -- # set +x 00:23:09.660 16:34:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.660 16:34:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:09.660 16:34:18 -- nvmf/common.sh@717 -- # local ip 00:23:09.660 16:34:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:09.660 16:34:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:09.660 16:34:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.660 16:34:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.660 16:34:18 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:09.660 16:34:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:09.660 16:34:18 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:09.660 16:34:18 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:09.660 16:34:18 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:09.660 16:34:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:23:09.660 16:34:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.660 16:34:18 -- common/autotest_common.sh@10 -- # set +x 00:23:09.918 nvme0n1 00:23:09.918 16:34:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.918 16:34:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:09.918 16:34:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:09.918 16:34:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.918 16:34:18 -- common/autotest_common.sh@10 -- # set +x 00:23:09.918 16:34:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.918 16:34:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.918 16:34:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:09.918 16:34:18 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.918 16:34:18 -- common/autotest_common.sh@10 -- # set +x 00:23:09.918 16:34:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.918 16:34:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:09.918 16:34:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:23:09.918 16:34:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:09.918 16:34:18 -- host/auth.sh@44 -- # digest=sha512 00:23:09.918 16:34:18 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:09.918 16:34:18 -- host/auth.sh@44 -- # keyid=4 00:23:09.918 16:34:18 -- host/auth.sh@45 -- # key=DHHC-1:03:NjAwMDI5ODQwMzE5MWI3ZTRjZDU1OGI4MTM5ZGY4NTZlMDhhOTUyMGI5MzY1OGFhMTJhOTA1NTk4MWE2ZmRkZgqk8k0=: 00:23:09.918 16:34:18 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:09.918 16:34:18 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:09.918 16:34:18 -- host/auth.sh@49 -- # echo DHHC-1:03:NjAwMDI5ODQwMzE5MWI3ZTRjZDU1OGI4MTM5ZGY4NTZlMDhhOTUyMGI5MzY1OGFhMTJhOTA1NTk4MWE2ZmRkZgqk8k0=: 00:23:09.918 16:34:18 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:23:09.918 16:34:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:09.918 16:34:18 -- host/auth.sh@68 -- # digest=sha512 00:23:09.918 16:34:18 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:09.918 16:34:18 -- host/auth.sh@68 -- # keyid=4 00:23:09.918 16:34:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:09.918 16:34:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.918 16:34:18 -- common/autotest_common.sh@10 -- # set +x 00:23:09.918 16:34:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.918 16:34:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:09.918 16:34:18 -- nvmf/common.sh@717 -- # local ip 00:23:09.918 16:34:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:09.918 16:34:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:09.918 16:34:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.918 16:34:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.918 16:34:18 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:09.918 16:34:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:09.918 16:34:18 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:09.918 16:34:18 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:09.918 16:34:18 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:09.918 16:34:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:09.918 16:34:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.918 16:34:18 -- common/autotest_common.sh@10 -- # set +x 00:23:09.918 nvme0n1 00:23:09.918 16:34:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.918 16:34:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:09.918 16:34:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.918 16:34:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:09.918 16:34:18 -- common/autotest_common.sh@10 -- # set +x 00:23:09.918 16:34:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.176 16:34:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.176 16:34:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:10.176 16:34:18 -- common/autotest_common.sh@549 -- # xtrace_disable 
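The frames just traced form the per-key cycle this test repeats for every digest/DH-group/key combination: restrict the initiator's DH-HMAC-CHAP options, attach with the key under test, confirm the controller appears, detach. A minimal stand-alone sketch of that cycle, assuming rpc_cmd in the trace wraps scripts/rpc.py (socket selection omitted) and that the keyring entry key4 was registered earlier in the test run (not shown in this excerpt):

#!/usr/bin/env bash
# One connect_authenticate iteration, mirroring the sha512/ffdhe2048/key4 pass above.
set -e
rpc=./scripts/rpc.py   # assumed location; rpc_cmd in the log is assumed to wrap this

digest=sha512 dhgroup=ffdhe2048 keyid=4

# Limit the initiator to the digest/DH group pair under test.
"$rpc" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach over RDMA, authenticating with the matching keyring entry.
"$rpc" bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
	-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid"

# Authentication succeeded only if the controller actually shows up.
[[ $("$rpc" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# Tear down before the next combination.
"$rpc" bdev_nvme_detach_controller nvme0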
00:23:10.176 16:34:18 -- common/autotest_common.sh@10 -- # set +x 00:23:10.176 16:34:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.176 16:34:18 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:23:10.176 16:34:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:10.176 16:34:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:23:10.176 16:34:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:10.176 16:34:18 -- host/auth.sh@44 -- # digest=sha512 00:23:10.176 16:34:18 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:10.176 16:34:18 -- host/auth.sh@44 -- # keyid=0 00:23:10.176 16:34:18 -- host/auth.sh@45 -- # key=DHHC-1:00:NTVhMTVjZDNlNWFlZTM2NTY0ZjU3MjFmNjQ2YmI2ZjEV/MxR: 00:23:10.176 16:34:18 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:10.176 16:34:18 -- host/auth.sh@48 -- # echo ffdhe3072 00:23:10.176 16:34:18 -- host/auth.sh@49 -- # echo DHHC-1:00:NTVhMTVjZDNlNWFlZTM2NTY0ZjU3MjFmNjQ2YmI2ZjEV/MxR: 00:23:10.176 16:34:18 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:23:10.176 16:34:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:10.176 16:34:18 -- host/auth.sh@68 -- # digest=sha512 00:23:10.176 16:34:18 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:23:10.176 16:34:18 -- host/auth.sh@68 -- # keyid=0 00:23:10.176 16:34:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:10.176 16:34:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.176 16:34:18 -- common/autotest_common.sh@10 -- # set +x 00:23:10.176 16:34:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.176 16:34:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:10.176 16:34:18 -- nvmf/common.sh@717 -- # local ip 00:23:10.176 16:34:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:10.176 16:34:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:10.176 16:34:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:10.176 16:34:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:10.176 16:34:18 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:10.177 16:34:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:10.177 16:34:18 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:10.177 16:34:18 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:10.177 16:34:18 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:10.177 16:34:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:23:10.177 16:34:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.177 16:34:18 -- common/autotest_common.sh@10 -- # set +x 00:23:10.177 nvme0n1 00:23:10.177 16:34:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.177 16:34:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:10.177 16:34:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.177 16:34:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:10.177 16:34:19 -- common/autotest_common.sh@10 -- # set +x 00:23:10.177 16:34:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.177 16:34:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.177 16:34:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:10.435 16:34:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.435 16:34:19 -- common/autotest_common.sh@10 -- # set +x 
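The host/auth.sh@107-@111 frames are the three nested loops driving this sweep: every digest is paired with every DH group and every key index, and each combination runs nvmet_auth_set_key followed by connect_authenticate (both helpers are defined earlier in host/auth.sh). In outline, with only the digests exercised in this part of the log, key 0 shown verbatim and the remaining keys elided:

# Outline of the sweep traced at host/auth.sh@107-111.
digests=(sha384 sha512)                                       # digests seen in this excerpt
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
keys=(
	"DHHC-1:00:NTVhMTVjZDNlNWFlZTM2NTY0ZjU3MjFmNjQ2YmI2ZjEV/MxR:"
	# ... keys 1-4 as echoed elsewhere in the trace ...
)

for digest in "${digests[@]}"; do
	for dhgroup in "${dhgroups[@]}"; do
		for keyid in "${!keys[@]}"; do
			# Target side: publish hmac($digest), $dhgroup and the keyid-th secret.
			nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
			# Host side: reconnect with key$keyid and verify the controller comes up.
			connect_authenticate "$digest" "$dhgroup" "$keyid"
		done
	done
done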
00:23:10.435 16:34:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.435 16:34:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:10.435 16:34:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:23:10.435 16:34:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:10.435 16:34:19 -- host/auth.sh@44 -- # digest=sha512 00:23:10.435 16:34:19 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:10.435 16:34:19 -- host/auth.sh@44 -- # keyid=1 00:23:10.435 16:34:19 -- host/auth.sh@45 -- # key=DHHC-1:00:MWM4OWNmNjMxZDhlNjU4YWU5NmNhNTJhMWJlYTgxODcwMDlkNjZmYTRkNGM4MTA0bu+4fw==: 00:23:10.435 16:34:19 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:10.435 16:34:19 -- host/auth.sh@48 -- # echo ffdhe3072 00:23:10.435 16:34:19 -- host/auth.sh@49 -- # echo DHHC-1:00:MWM4OWNmNjMxZDhlNjU4YWU5NmNhNTJhMWJlYTgxODcwMDlkNjZmYTRkNGM4MTA0bu+4fw==: 00:23:10.435 16:34:19 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:23:10.435 16:34:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:10.435 16:34:19 -- host/auth.sh@68 -- # digest=sha512 00:23:10.435 16:34:19 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:23:10.435 16:34:19 -- host/auth.sh@68 -- # keyid=1 00:23:10.435 16:34:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:10.435 16:34:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.435 16:34:19 -- common/autotest_common.sh@10 -- # set +x 00:23:10.435 16:34:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.435 16:34:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:10.435 16:34:19 -- nvmf/common.sh@717 -- # local ip 00:23:10.435 16:34:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:10.435 16:34:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:10.435 16:34:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:10.435 16:34:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:10.435 16:34:19 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:10.435 16:34:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:10.435 16:34:19 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:10.435 16:34:19 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:10.435 16:34:19 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:10.435 16:34:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:23:10.435 16:34:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.435 16:34:19 -- common/autotest_common.sh@10 -- # set +x 00:23:10.435 nvme0n1 00:23:10.435 16:34:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.435 16:34:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:10.435 16:34:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:10.435 16:34:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.435 16:34:19 -- common/autotest_common.sh@10 -- # set +x 00:23:10.435 16:34:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.435 16:34:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.435 16:34:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:10.435 16:34:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.435 16:34:19 -- common/autotest_common.sh@10 -- # set +x 00:23:10.435 16:34:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.435 16:34:19 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:10.435 16:34:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:23:10.436 16:34:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:10.436 16:34:19 -- host/auth.sh@44 -- # digest=sha512 00:23:10.436 16:34:19 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:10.436 16:34:19 -- host/auth.sh@44 -- # keyid=2 00:23:10.436 16:34:19 -- host/auth.sh@45 -- # key=DHHC-1:01:MzRkNjkyZjY0NDVmMTk1ZWVkYjQwNDA2ZThkZTJjYTQw9r9h: 00:23:10.436 16:34:19 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:10.436 16:34:19 -- host/auth.sh@48 -- # echo ffdhe3072 00:23:10.436 16:34:19 -- host/auth.sh@49 -- # echo DHHC-1:01:MzRkNjkyZjY0NDVmMTk1ZWVkYjQwNDA2ZThkZTJjYTQw9r9h: 00:23:10.436 16:34:19 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:23:10.436 16:34:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:10.436 16:34:19 -- host/auth.sh@68 -- # digest=sha512 00:23:10.436 16:34:19 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:23:10.436 16:34:19 -- host/auth.sh@68 -- # keyid=2 00:23:10.436 16:34:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:10.436 16:34:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.436 16:34:19 -- common/autotest_common.sh@10 -- # set +x 00:23:10.694 16:34:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.694 16:34:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:10.694 16:34:19 -- nvmf/common.sh@717 -- # local ip 00:23:10.694 16:34:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:10.694 16:34:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:10.694 16:34:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:10.694 16:34:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:10.694 16:34:19 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:10.694 16:34:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:10.694 16:34:19 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:10.694 16:34:19 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:10.694 16:34:19 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:10.694 16:34:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:10.694 16:34:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.694 16:34:19 -- common/autotest_common.sh@10 -- # set +x 00:23:10.694 nvme0n1 00:23:10.694 16:34:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.694 16:34:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:10.694 16:34:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:10.694 16:34:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.694 16:34:19 -- common/autotest_common.sh@10 -- # set +x 00:23:10.694 16:34:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.694 16:34:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.694 16:34:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:10.694 16:34:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.694 16:34:19 -- common/autotest_common.sh@10 -- # set +x 00:23:10.694 16:34:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.694 16:34:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:10.694 16:34:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 
00:23:10.694 16:34:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:10.694 16:34:19 -- host/auth.sh@44 -- # digest=sha512 00:23:10.694 16:34:19 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:10.694 16:34:19 -- host/auth.sh@44 -- # keyid=3 00:23:10.694 16:34:19 -- host/auth.sh@45 -- # key=DHHC-1:02:Nzk2M2YwZjYyYjg0Zjk2MzM2MGI5MGQ4OTE5MjIxN2Y2OGYyYjIzMjIyZjg3OWVmxPGfTg==: 00:23:10.694 16:34:19 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:10.694 16:34:19 -- host/auth.sh@48 -- # echo ffdhe3072 00:23:10.694 16:34:19 -- host/auth.sh@49 -- # echo DHHC-1:02:Nzk2M2YwZjYyYjg0Zjk2MzM2MGI5MGQ4OTE5MjIxN2Y2OGYyYjIzMjIyZjg3OWVmxPGfTg==: 00:23:10.694 16:34:19 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:23:10.694 16:34:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:10.694 16:34:19 -- host/auth.sh@68 -- # digest=sha512 00:23:10.694 16:34:19 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:23:10.694 16:34:19 -- host/auth.sh@68 -- # keyid=3 00:23:10.694 16:34:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:10.694 16:34:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.694 16:34:19 -- common/autotest_common.sh@10 -- # set +x 00:23:10.694 16:34:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.694 16:34:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:10.694 16:34:19 -- nvmf/common.sh@717 -- # local ip 00:23:10.694 16:34:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:10.694 16:34:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:10.694 16:34:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:10.694 16:34:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:10.694 16:34:19 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:10.694 16:34:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:10.694 16:34:19 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:10.694 16:34:19 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:10.694 16:34:19 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:10.694 16:34:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:23:10.694 16:34:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.694 16:34:19 -- common/autotest_common.sh@10 -- # set +x 00:23:10.953 nvme0n1 00:23:10.953 16:34:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.953 16:34:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:10.953 16:34:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:10.953 16:34:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.953 16:34:19 -- common/autotest_common.sh@10 -- # set +x 00:23:10.953 16:34:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.953 16:34:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.953 16:34:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:10.953 16:34:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.953 16:34:19 -- common/autotest_common.sh@10 -- # set +x 00:23:10.953 16:34:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.953 16:34:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:10.953 16:34:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:23:10.953 16:34:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:10.953 
16:34:19 -- host/auth.sh@44 -- # digest=sha512 00:23:10.953 16:34:19 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:10.953 16:34:19 -- host/auth.sh@44 -- # keyid=4 00:23:10.953 16:34:19 -- host/auth.sh@45 -- # key=DHHC-1:03:NjAwMDI5ODQwMzE5MWI3ZTRjZDU1OGI4MTM5ZGY4NTZlMDhhOTUyMGI5MzY1OGFhMTJhOTA1NTk4MWE2ZmRkZgqk8k0=: 00:23:10.953 16:34:19 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:10.953 16:34:19 -- host/auth.sh@48 -- # echo ffdhe3072 00:23:10.953 16:34:19 -- host/auth.sh@49 -- # echo DHHC-1:03:NjAwMDI5ODQwMzE5MWI3ZTRjZDU1OGI4MTM5ZGY4NTZlMDhhOTUyMGI5MzY1OGFhMTJhOTA1NTk4MWE2ZmRkZgqk8k0=: 00:23:10.953 16:34:19 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:23:10.953 16:34:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:10.953 16:34:19 -- host/auth.sh@68 -- # digest=sha512 00:23:10.953 16:34:19 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:23:10.953 16:34:19 -- host/auth.sh@68 -- # keyid=4 00:23:10.953 16:34:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:10.953 16:34:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.953 16:34:19 -- common/autotest_common.sh@10 -- # set +x 00:23:10.953 16:34:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.953 16:34:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:10.953 16:34:19 -- nvmf/common.sh@717 -- # local ip 00:23:10.953 16:34:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:10.953 16:34:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:10.953 16:34:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:10.953 16:34:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:10.953 16:34:19 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:10.953 16:34:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:10.953 16:34:19 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:10.953 16:34:19 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:10.953 16:34:19 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:10.953 16:34:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:10.953 16:34:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.953 16:34:19 -- common/autotest_common.sh@10 -- # set +x 00:23:11.211 nvme0n1 00:23:11.211 16:34:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:11.211 16:34:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.211 16:34:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:11.211 16:34:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:11.211 16:34:20 -- common/autotest_common.sh@10 -- # set +x 00:23:11.211 16:34:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:11.211 16:34:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.211 16:34:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.211 16:34:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:11.211 16:34:20 -- common/autotest_common.sh@10 -- # set +x 00:23:11.211 16:34:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:11.211 16:34:20 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:23:11.211 16:34:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:11.211 16:34:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:23:11.211 16:34:20 -- host/auth.sh@42 -- # local digest dhgroup 
keyid key 00:23:11.211 16:34:20 -- host/auth.sh@44 -- # digest=sha512 00:23:11.211 16:34:20 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:11.211 16:34:20 -- host/auth.sh@44 -- # keyid=0 00:23:11.211 16:34:20 -- host/auth.sh@45 -- # key=DHHC-1:00:NTVhMTVjZDNlNWFlZTM2NTY0ZjU3MjFmNjQ2YmI2ZjEV/MxR: 00:23:11.211 16:34:20 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:11.211 16:34:20 -- host/auth.sh@48 -- # echo ffdhe4096 00:23:11.211 16:34:20 -- host/auth.sh@49 -- # echo DHHC-1:00:NTVhMTVjZDNlNWFlZTM2NTY0ZjU3MjFmNjQ2YmI2ZjEV/MxR: 00:23:11.211 16:34:20 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:23:11.211 16:34:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:11.211 16:34:20 -- host/auth.sh@68 -- # digest=sha512 00:23:11.211 16:34:20 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:23:11.211 16:34:20 -- host/auth.sh@68 -- # keyid=0 00:23:11.211 16:34:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:11.211 16:34:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:11.211 16:34:20 -- common/autotest_common.sh@10 -- # set +x 00:23:11.211 16:34:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:11.211 16:34:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:11.211 16:34:20 -- nvmf/common.sh@717 -- # local ip 00:23:11.211 16:34:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:11.211 16:34:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:11.211 16:34:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.211 16:34:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.211 16:34:20 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:11.211 16:34:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:11.211 16:34:20 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:11.211 16:34:20 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:11.211 16:34:20 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:11.211 16:34:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:23:11.211 16:34:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:11.211 16:34:20 -- common/autotest_common.sh@10 -- # set +x 00:23:11.469 nvme0n1 00:23:11.469 16:34:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:11.469 16:34:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.469 16:34:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:11.469 16:34:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:11.469 16:34:20 -- common/autotest_common.sh@10 -- # set +x 00:23:11.469 16:34:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:11.469 16:34:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.469 16:34:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.469 16:34:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:11.469 16:34:20 -- common/autotest_common.sh@10 -- # set +x 00:23:11.469 16:34:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:11.469 16:34:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:11.469 16:34:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:23:11.469 16:34:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:11.469 16:34:20 -- host/auth.sh@44 -- # digest=sha512 00:23:11.469 16:34:20 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 
00:23:11.469 16:34:20 -- host/auth.sh@44 -- # keyid=1 00:23:11.469 16:34:20 -- host/auth.sh@45 -- # key=DHHC-1:00:MWM4OWNmNjMxZDhlNjU4YWU5NmNhNTJhMWJlYTgxODcwMDlkNjZmYTRkNGM4MTA0bu+4fw==: 00:23:11.469 16:34:20 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:11.469 16:34:20 -- host/auth.sh@48 -- # echo ffdhe4096 00:23:11.469 16:34:20 -- host/auth.sh@49 -- # echo DHHC-1:00:MWM4OWNmNjMxZDhlNjU4YWU5NmNhNTJhMWJlYTgxODcwMDlkNjZmYTRkNGM4MTA0bu+4fw==: 00:23:11.470 16:34:20 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 1 00:23:11.470 16:34:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:11.470 16:34:20 -- host/auth.sh@68 -- # digest=sha512 00:23:11.470 16:34:20 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:23:11.470 16:34:20 -- host/auth.sh@68 -- # keyid=1 00:23:11.470 16:34:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:11.470 16:34:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:11.470 16:34:20 -- common/autotest_common.sh@10 -- # set +x 00:23:11.470 16:34:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:11.470 16:34:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:11.470 16:34:20 -- nvmf/common.sh@717 -- # local ip 00:23:11.470 16:34:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:11.470 16:34:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:11.470 16:34:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.470 16:34:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.470 16:34:20 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:11.470 16:34:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:11.470 16:34:20 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:11.470 16:34:20 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:11.470 16:34:20 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:11.470 16:34:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:23:11.470 16:34:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:11.470 16:34:20 -- common/autotest_common.sh@10 -- # set +x 00:23:11.727 nvme0n1 00:23:11.727 16:34:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:11.727 16:34:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.727 16:34:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:11.727 16:34:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:11.727 16:34:20 -- common/autotest_common.sh@10 -- # set +x 00:23:11.727 16:34:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:11.727 16:34:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.727 16:34:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.727 16:34:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:11.727 16:34:20 -- common/autotest_common.sh@10 -- # set +x 00:23:11.727 16:34:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:11.728 16:34:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:11.728 16:34:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:23:11.728 16:34:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:11.728 16:34:20 -- host/auth.sh@44 -- # digest=sha512 00:23:11.728 16:34:20 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:11.728 16:34:20 -- host/auth.sh@44 -- # keyid=2 00:23:11.728 16:34:20 -- host/auth.sh@45 -- 
# key=DHHC-1:01:MzRkNjkyZjY0NDVmMTk1ZWVkYjQwNDA2ZThkZTJjYTQw9r9h: 00:23:11.728 16:34:20 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:11.728 16:34:20 -- host/auth.sh@48 -- # echo ffdhe4096 00:23:11.728 16:34:20 -- host/auth.sh@49 -- # echo DHHC-1:01:MzRkNjkyZjY0NDVmMTk1ZWVkYjQwNDA2ZThkZTJjYTQw9r9h: 00:23:11.728 16:34:20 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:23:11.728 16:34:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:11.728 16:34:20 -- host/auth.sh@68 -- # digest=sha512 00:23:11.728 16:34:20 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:23:11.728 16:34:20 -- host/auth.sh@68 -- # keyid=2 00:23:11.728 16:34:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:11.728 16:34:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:11.728 16:34:20 -- common/autotest_common.sh@10 -- # set +x 00:23:11.728 16:34:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:11.728 16:34:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:11.728 16:34:20 -- nvmf/common.sh@717 -- # local ip 00:23:11.728 16:34:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:11.728 16:34:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:11.728 16:34:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.728 16:34:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.728 16:34:20 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:11.728 16:34:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:11.728 16:34:20 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:11.728 16:34:20 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:11.728 16:34:20 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:11.728 16:34:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:11.728 16:34:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:11.728 16:34:20 -- common/autotest_common.sh@10 -- # set +x 00:23:11.985 nvme0n1 00:23:11.985 16:34:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:11.985 16:34:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.985 16:34:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:11.985 16:34:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:11.985 16:34:20 -- common/autotest_common.sh@10 -- # set +x 00:23:11.985 16:34:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:11.985 16:34:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.985 16:34:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.985 16:34:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:11.985 16:34:20 -- common/autotest_common.sh@10 -- # set +x 00:23:11.985 16:34:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:11.985 16:34:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:11.985 16:34:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:23:11.985 16:34:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:11.985 16:34:20 -- host/auth.sh@44 -- # digest=sha512 00:23:11.985 16:34:20 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:11.985 16:34:20 -- host/auth.sh@44 -- # keyid=3 00:23:11.985 16:34:20 -- host/auth.sh@45 -- # key=DHHC-1:02:Nzk2M2YwZjYyYjg0Zjk2MzM2MGI5MGQ4OTE5MjIxN2Y2OGYyYjIzMjIyZjg3OWVmxPGfTg==: 00:23:11.985 16:34:20 -- host/auth.sh@47 -- # echo 
'hmac(sha512)' 00:23:11.985 16:34:20 -- host/auth.sh@48 -- # echo ffdhe4096 00:23:11.985 16:34:20 -- host/auth.sh@49 -- # echo DHHC-1:02:Nzk2M2YwZjYyYjg0Zjk2MzM2MGI5MGQ4OTE5MjIxN2Y2OGYyYjIzMjIyZjg3OWVmxPGfTg==: 00:23:11.985 16:34:20 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:23:11.985 16:34:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:11.985 16:34:20 -- host/auth.sh@68 -- # digest=sha512 00:23:11.985 16:34:20 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:23:11.985 16:34:20 -- host/auth.sh@68 -- # keyid=3 00:23:11.985 16:34:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:11.985 16:34:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:11.985 16:34:20 -- common/autotest_common.sh@10 -- # set +x 00:23:11.985 16:34:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:11.985 16:34:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:11.985 16:34:21 -- nvmf/common.sh@717 -- # local ip 00:23:11.985 16:34:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:11.985 16:34:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:11.985 16:34:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.985 16:34:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.985 16:34:21 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:11.985 16:34:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:11.985 16:34:21 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:11.985 16:34:21 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:11.985 16:34:21 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:11.985 16:34:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:23:11.985 16:34:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:11.985 16:34:21 -- common/autotest_common.sh@10 -- # set +x 00:23:12.243 nvme0n1 00:23:12.243 16:34:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:12.243 16:34:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:12.243 16:34:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:12.243 16:34:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:12.243 16:34:21 -- common/autotest_common.sh@10 -- # set +x 00:23:12.243 16:34:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:12.243 16:34:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.243 16:34:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:12.243 16:34:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:12.243 16:34:21 -- common/autotest_common.sh@10 -- # set +x 00:23:12.501 16:34:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:12.501 16:34:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:12.501 16:34:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:23:12.501 16:34:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:12.501 16:34:21 -- host/auth.sh@44 -- # digest=sha512 00:23:12.501 16:34:21 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:12.501 16:34:21 -- host/auth.sh@44 -- # keyid=4 00:23:12.501 16:34:21 -- host/auth.sh@45 -- # key=DHHC-1:03:NjAwMDI5ODQwMzE5MWI3ZTRjZDU1OGI4MTM5ZGY4NTZlMDhhOTUyMGI5MzY1OGFhMTJhOTA1NTk4MWE2ZmRkZgqk8k0=: 00:23:12.501 16:34:21 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:12.501 16:34:21 -- host/auth.sh@48 -- # echo ffdhe4096 
00:23:12.501 16:34:21 -- host/auth.sh@49 -- # echo DHHC-1:03:NjAwMDI5ODQwMzE5MWI3ZTRjZDU1OGI4MTM5ZGY4NTZlMDhhOTUyMGI5MzY1OGFhMTJhOTA1NTk4MWE2ZmRkZgqk8k0=: 00:23:12.501 16:34:21 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:23:12.501 16:34:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:12.501 16:34:21 -- host/auth.sh@68 -- # digest=sha512 00:23:12.501 16:34:21 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:23:12.501 16:34:21 -- host/auth.sh@68 -- # keyid=4 00:23:12.501 16:34:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:12.501 16:34:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:12.501 16:34:21 -- common/autotest_common.sh@10 -- # set +x 00:23:12.501 16:34:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:12.501 16:34:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:12.501 16:34:21 -- nvmf/common.sh@717 -- # local ip 00:23:12.501 16:34:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:12.501 16:34:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:12.501 16:34:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.501 16:34:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.501 16:34:21 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:12.501 16:34:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:12.501 16:34:21 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:12.501 16:34:21 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:12.501 16:34:21 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:12.501 16:34:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:12.501 16:34:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:12.501 16:34:21 -- common/autotest_common.sh@10 -- # set +x 00:23:12.501 nvme0n1 00:23:12.501 16:34:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:12.501 16:34:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:12.501 16:34:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:12.501 16:34:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:12.501 16:34:21 -- common/autotest_common.sh@10 -- # set +x 00:23:12.501 16:34:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:12.759 16:34:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.759 16:34:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:12.759 16:34:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:12.759 16:34:21 -- common/autotest_common.sh@10 -- # set +x 00:23:12.759 16:34:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:12.759 16:34:21 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:23:12.759 16:34:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:12.759 16:34:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:23:12.759 16:34:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:12.759 16:34:21 -- host/auth.sh@44 -- # digest=sha512 00:23:12.759 16:34:21 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:12.759 16:34:21 -- host/auth.sh@44 -- # keyid=0 00:23:12.759 16:34:21 -- host/auth.sh@45 -- # key=DHHC-1:00:NTVhMTVjZDNlNWFlZTM2NTY0ZjU3MjFmNjQ2YmI2ZjEV/MxR: 00:23:12.759 16:34:21 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:12.759 16:34:21 -- host/auth.sh@48 -- # echo ffdhe6144 00:23:12.759 16:34:21 
-- host/auth.sh@49 -- # echo DHHC-1:00:NTVhMTVjZDNlNWFlZTM2NTY0ZjU3MjFmNjQ2YmI2ZjEV/MxR: 00:23:12.759 16:34:21 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:23:12.759 16:34:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:12.759 16:34:21 -- host/auth.sh@68 -- # digest=sha512 00:23:12.759 16:34:21 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:23:12.759 16:34:21 -- host/auth.sh@68 -- # keyid=0 00:23:12.759 16:34:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:12.759 16:34:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:12.759 16:34:21 -- common/autotest_common.sh@10 -- # set +x 00:23:12.759 16:34:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:12.759 16:34:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:12.759 16:34:21 -- nvmf/common.sh@717 -- # local ip 00:23:12.759 16:34:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:12.759 16:34:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:12.759 16:34:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.759 16:34:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.759 16:34:21 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:12.760 16:34:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:12.760 16:34:21 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:12.760 16:34:21 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:12.760 16:34:21 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:12.760 16:34:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:23:12.760 16:34:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:12.760 16:34:21 -- common/autotest_common.sh@10 -- # set +x 00:23:13.018 nvme0n1 00:23:13.018 16:34:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.018 16:34:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:13.018 16:34:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.018 16:34:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:13.018 16:34:21 -- common/autotest_common.sh@10 -- # set +x 00:23:13.018 16:34:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.018 16:34:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.018 16:34:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.018 16:34:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.018 16:34:21 -- common/autotest_common.sh@10 -- # set +x 00:23:13.018 16:34:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.018 16:34:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:13.018 16:34:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:23:13.018 16:34:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:13.018 16:34:21 -- host/auth.sh@44 -- # digest=sha512 00:23:13.018 16:34:21 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:13.018 16:34:21 -- host/auth.sh@44 -- # keyid=1 00:23:13.018 16:34:21 -- host/auth.sh@45 -- # key=DHHC-1:00:MWM4OWNmNjMxZDhlNjU4YWU5NmNhNTJhMWJlYTgxODcwMDlkNjZmYTRkNGM4MTA0bu+4fw==: 00:23:13.018 16:34:21 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:13.018 16:34:21 -- host/auth.sh@48 -- # echo ffdhe6144 00:23:13.018 16:34:21 -- host/auth.sh@49 -- # echo DHHC-1:00:MWM4OWNmNjMxZDhlNjU4YWU5NmNhNTJhMWJlYTgxODcwMDlkNjZmYTRkNGM4MTA0bu+4fw==: 
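Before each connect, the target side is re-keyed to match: the host/auth.sh@47-49 frames echo the HMAC name, the DH group and the DHHC-1 secret for the host entry. A minimal sketch of what those three echoes amount to, assuming the standard Linux nvmet configfs host attributes (the paths below are an assumption; only the echoed values appear in the trace):

# Target-side half of one iteration (sha512/ffdhe6144, key 1 as echoed above).
hostnqn=nqn.2024-02.io.spdk:host0
host_cfg=/sys/kernel/config/nvmet/hosts/$hostnqn   # assumed configfs location

digest=sha512
dhgroup=ffdhe6144
key='DHHC-1:00:MWM4OWNmNjMxZDhlNjU4YWU5NmNhNTJhMWJlYTgxODcwMDlkNjZmYTRkNGM4MTA0bu+4fw==:'

echo "hmac($digest)" > "$host_cfg/dhchap_hash"     # hash name as echoed at @47
echo "$dhgroup" > "$host_cfg/dhchap_dhgroup"       # DH group as echoed at @48
echo "$key" > "$host_cfg/dhchap_key"               # secret as echoed at @49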
00:23:13.018 16:34:21 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:23:13.018 16:34:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:13.018 16:34:21 -- host/auth.sh@68 -- # digest=sha512 00:23:13.018 16:34:21 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:23:13.018 16:34:21 -- host/auth.sh@68 -- # keyid=1 00:23:13.018 16:34:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:13.018 16:34:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.018 16:34:21 -- common/autotest_common.sh@10 -- # set +x 00:23:13.018 16:34:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.018 16:34:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:13.018 16:34:21 -- nvmf/common.sh@717 -- # local ip 00:23:13.018 16:34:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:13.018 16:34:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:13.018 16:34:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.018 16:34:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.018 16:34:21 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:13.018 16:34:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:13.018 16:34:21 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:13.018 16:34:21 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:13.018 16:34:21 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:13.018 16:34:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:23:13.018 16:34:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.018 16:34:21 -- common/autotest_common.sh@10 -- # set +x 00:23:13.583 nvme0n1 00:23:13.583 16:34:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.583 16:34:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:13.583 16:34:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:13.583 16:34:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.583 16:34:22 -- common/autotest_common.sh@10 -- # set +x 00:23:13.583 16:34:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.583 16:34:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.583 16:34:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.583 16:34:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.583 16:34:22 -- common/autotest_common.sh@10 -- # set +x 00:23:13.583 16:34:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.583 16:34:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:13.583 16:34:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:23:13.583 16:34:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:13.583 16:34:22 -- host/auth.sh@44 -- # digest=sha512 00:23:13.583 16:34:22 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:13.583 16:34:22 -- host/auth.sh@44 -- # keyid=2 00:23:13.583 16:34:22 -- host/auth.sh@45 -- # key=DHHC-1:01:MzRkNjkyZjY0NDVmMTk1ZWVkYjQwNDA2ZThkZTJjYTQw9r9h: 00:23:13.583 16:34:22 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:13.583 16:34:22 -- host/auth.sh@48 -- # echo ffdhe6144 00:23:13.583 16:34:22 -- host/auth.sh@49 -- # echo DHHC-1:01:MzRkNjkyZjY0NDVmMTk1ZWVkYjQwNDA2ZThkZTJjYTQw9r9h: 00:23:13.583 16:34:22 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:23:13.583 16:34:22 -- host/auth.sh@66 -- # local digest 
dhgroup keyid 00:23:13.583 16:34:22 -- host/auth.sh@68 -- # digest=sha512 00:23:13.583 16:34:22 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:23:13.583 16:34:22 -- host/auth.sh@68 -- # keyid=2 00:23:13.583 16:34:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:13.583 16:34:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.583 16:34:22 -- common/autotest_common.sh@10 -- # set +x 00:23:13.583 16:34:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.583 16:34:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:13.583 16:34:22 -- nvmf/common.sh@717 -- # local ip 00:23:13.583 16:34:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:13.583 16:34:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:13.583 16:34:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.583 16:34:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.583 16:34:22 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:13.583 16:34:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:13.583 16:34:22 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:13.583 16:34:22 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:13.583 16:34:22 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:13.583 16:34:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:13.583 16:34:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.583 16:34:22 -- common/autotest_common.sh@10 -- # set +x 00:23:13.840 nvme0n1 00:23:13.840 16:34:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.840 16:34:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:13.840 16:34:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.840 16:34:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:13.840 16:34:22 -- common/autotest_common.sh@10 -- # set +x 00:23:13.840 16:34:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.840 16:34:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.840 16:34:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.840 16:34:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.840 16:34:22 -- common/autotest_common.sh@10 -- # set +x 00:23:13.840 16:34:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.840 16:34:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:13.840 16:34:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:23:13.840 16:34:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:13.840 16:34:22 -- host/auth.sh@44 -- # digest=sha512 00:23:13.840 16:34:22 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:13.840 16:34:22 -- host/auth.sh@44 -- # keyid=3 00:23:13.840 16:34:22 -- host/auth.sh@45 -- # key=DHHC-1:02:Nzk2M2YwZjYyYjg0Zjk2MzM2MGI5MGQ4OTE5MjIxN2Y2OGYyYjIzMjIyZjg3OWVmxPGfTg==: 00:23:13.840 16:34:22 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:13.840 16:34:22 -- host/auth.sh@48 -- # echo ffdhe6144 00:23:13.840 16:34:22 -- host/auth.sh@49 -- # echo DHHC-1:02:Nzk2M2YwZjYyYjg0Zjk2MzM2MGI5MGQ4OTE5MjIxN2Y2OGYyYjIzMjIyZjg3OWVmxPGfTg==: 00:23:13.840 16:34:22 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:23:13.840 16:34:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:13.840 16:34:22 -- host/auth.sh@68 -- # digest=sha512 00:23:13.840 16:34:22 
-- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:23:13.840 16:34:22 -- host/auth.sh@68 -- # keyid=3 00:23:13.840 16:34:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:13.840 16:34:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.840 16:34:22 -- common/autotest_common.sh@10 -- # set +x 00:23:13.840 16:34:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:13.840 16:34:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:13.840 16:34:22 -- nvmf/common.sh@717 -- # local ip 00:23:13.840 16:34:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:13.840 16:34:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:13.840 16:34:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.840 16:34:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.840 16:34:22 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:13.840 16:34:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:13.840 16:34:22 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:13.840 16:34:22 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:13.840 16:34:22 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:13.840 16:34:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:23:13.840 16:34:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:13.840 16:34:22 -- common/autotest_common.sh@10 -- # set +x 00:23:14.097 nvme0n1 00:23:14.097 16:34:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:14.097 16:34:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:14.097 16:34:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:14.097 16:34:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:14.097 16:34:23 -- common/autotest_common.sh@10 -- # set +x 00:23:14.355 16:34:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:14.355 16:34:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.355 16:34:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.355 16:34:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:14.355 16:34:23 -- common/autotest_common.sh@10 -- # set +x 00:23:14.355 16:34:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:14.355 16:34:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:14.355 16:34:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:23:14.356 16:34:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:14.356 16:34:23 -- host/auth.sh@44 -- # digest=sha512 00:23:14.356 16:34:23 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:14.356 16:34:23 -- host/auth.sh@44 -- # keyid=4 00:23:14.356 16:34:23 -- host/auth.sh@45 -- # key=DHHC-1:03:NjAwMDI5ODQwMzE5MWI3ZTRjZDU1OGI4MTM5ZGY4NTZlMDhhOTUyMGI5MzY1OGFhMTJhOTA1NTk4MWE2ZmRkZgqk8k0=: 00:23:14.356 16:34:23 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:14.356 16:34:23 -- host/auth.sh@48 -- # echo ffdhe6144 00:23:14.356 16:34:23 -- host/auth.sh@49 -- # echo DHHC-1:03:NjAwMDI5ODQwMzE5MWI3ZTRjZDU1OGI4MTM5ZGY4NTZlMDhhOTUyMGI5MzY1OGFhMTJhOTA1NTk4MWE2ZmRkZgqk8k0=: 00:23:14.356 16:34:23 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:23:14.356 16:34:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:14.356 16:34:23 -- host/auth.sh@68 -- # digest=sha512 00:23:14.356 16:34:23 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:23:14.356 
16:34:23 -- host/auth.sh@68 -- # keyid=4 00:23:14.356 16:34:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:14.356 16:34:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:14.356 16:34:23 -- common/autotest_common.sh@10 -- # set +x 00:23:14.356 16:34:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:14.356 16:34:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:14.356 16:34:23 -- nvmf/common.sh@717 -- # local ip 00:23:14.356 16:34:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:14.356 16:34:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:14.356 16:34:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.356 16:34:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.356 16:34:23 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:14.356 16:34:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:14.356 16:34:23 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:14.356 16:34:23 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:14.356 16:34:23 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:14.356 16:34:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:14.356 16:34:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:14.356 16:34:23 -- common/autotest_common.sh@10 -- # set +x 00:23:14.614 nvme0n1 00:23:14.614 16:34:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:14.614 16:34:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:14.614 16:34:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:14.614 16:34:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:14.614 16:34:23 -- common/autotest_common.sh@10 -- # set +x 00:23:14.614 16:34:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:14.614 16:34:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.614 16:34:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.614 16:34:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:14.614 16:34:23 -- common/autotest_common.sh@10 -- # set +x 00:23:14.614 16:34:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:14.614 16:34:23 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:23:14.614 16:34:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:14.614 16:34:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:23:14.614 16:34:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:14.614 16:34:23 -- host/auth.sh@44 -- # digest=sha512 00:23:14.614 16:34:23 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:14.614 16:34:23 -- host/auth.sh@44 -- # keyid=0 00:23:14.614 16:34:23 -- host/auth.sh@45 -- # key=DHHC-1:00:NTVhMTVjZDNlNWFlZTM2NTY0ZjU3MjFmNjQ2YmI2ZjEV/MxR: 00:23:14.614 16:34:23 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:14.614 16:34:23 -- host/auth.sh@48 -- # echo ffdhe8192 00:23:14.614 16:34:23 -- host/auth.sh@49 -- # echo DHHC-1:00:NTVhMTVjZDNlNWFlZTM2NTY0ZjU3MjFmNjQ2YmI2ZjEV/MxR: 00:23:14.614 16:34:23 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:23:14.614 16:34:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:14.614 16:34:23 -- host/auth.sh@68 -- # digest=sha512 00:23:14.614 16:34:23 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:23:14.614 16:34:23 -- host/auth.sh@68 -- # keyid=0 00:23:14.614 16:34:23 -- 
host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:14.614 16:34:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:14.614 16:34:23 -- common/autotest_common.sh@10 -- # set +x 00:23:14.614 16:34:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:14.614 16:34:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:14.614 16:34:23 -- nvmf/common.sh@717 -- # local ip 00:23:14.614 16:34:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:14.614 16:34:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:14.614 16:34:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.614 16:34:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.614 16:34:23 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:14.614 16:34:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:14.614 16:34:23 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:14.614 16:34:23 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:14.614 16:34:23 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:14.614 16:34:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:23:14.614 16:34:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:14.614 16:34:23 -- common/autotest_common.sh@10 -- # set +x 00:23:15.178 nvme0n1 00:23:15.178 16:34:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:15.178 16:34:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.178 16:34:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:15.178 16:34:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:15.178 16:34:24 -- common/autotest_common.sh@10 -- # set +x 00:23:15.178 16:34:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:15.178 16:34:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.178 16:34:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.178 16:34:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:15.178 16:34:24 -- common/autotest_common.sh@10 -- # set +x 00:23:15.178 16:34:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:15.178 16:34:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:15.178 16:34:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:23:15.178 16:34:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:15.178 16:34:24 -- host/auth.sh@44 -- # digest=sha512 00:23:15.178 16:34:24 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:15.178 16:34:24 -- host/auth.sh@44 -- # keyid=1 00:23:15.178 16:34:24 -- host/auth.sh@45 -- # key=DHHC-1:00:MWM4OWNmNjMxZDhlNjU4YWU5NmNhNTJhMWJlYTgxODcwMDlkNjZmYTRkNGM4MTA0bu+4fw==: 00:23:15.178 16:34:24 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:15.178 16:34:24 -- host/auth.sh@48 -- # echo ffdhe8192 00:23:15.178 16:34:24 -- host/auth.sh@49 -- # echo DHHC-1:00:MWM4OWNmNjMxZDhlNjU4YWU5NmNhNTJhMWJlYTgxODcwMDlkNjZmYTRkNGM4MTA0bu+4fw==: 00:23:15.178 16:34:24 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:23:15.178 16:34:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:15.178 16:34:24 -- host/auth.sh@68 -- # digest=sha512 00:23:15.178 16:34:24 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:23:15.178 16:34:24 -- host/auth.sh@68 -- # keyid=1 00:23:15.178 16:34:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups 
ffdhe8192 00:23:15.178 16:34:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:15.178 16:34:24 -- common/autotest_common.sh@10 -- # set +x 00:23:15.178 16:34:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:15.178 16:34:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:15.178 16:34:24 -- nvmf/common.sh@717 -- # local ip 00:23:15.178 16:34:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:15.178 16:34:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:15.178 16:34:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.178 16:34:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.178 16:34:24 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:15.178 16:34:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:15.178 16:34:24 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:15.178 16:34:24 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:15.178 16:34:24 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:15.178 16:34:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:23:15.178 16:34:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:15.178 16:34:24 -- common/autotest_common.sh@10 -- # set +x 00:23:15.742 nvme0n1 00:23:15.742 16:34:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:15.742 16:34:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.742 16:34:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:15.742 16:34:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:15.742 16:34:24 -- common/autotest_common.sh@10 -- # set +x 00:23:15.742 16:34:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:15.742 16:34:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.742 16:34:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.742 16:34:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:15.742 16:34:24 -- common/autotest_common.sh@10 -- # set +x 00:23:15.742 16:34:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:15.742 16:34:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:15.742 16:34:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:23:15.742 16:34:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:15.742 16:34:24 -- host/auth.sh@44 -- # digest=sha512 00:23:15.742 16:34:24 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:15.742 16:34:24 -- host/auth.sh@44 -- # keyid=2 00:23:15.742 16:34:24 -- host/auth.sh@45 -- # key=DHHC-1:01:MzRkNjkyZjY0NDVmMTk1ZWVkYjQwNDA2ZThkZTJjYTQw9r9h: 00:23:15.742 16:34:24 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:15.742 16:34:24 -- host/auth.sh@48 -- # echo ffdhe8192 00:23:15.742 16:34:24 -- host/auth.sh@49 -- # echo DHHC-1:01:MzRkNjkyZjY0NDVmMTk1ZWVkYjQwNDA2ZThkZTJjYTQw9r9h: 00:23:15.742 16:34:24 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:23:15.742 16:34:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:15.742 16:34:24 -- host/auth.sh@68 -- # digest=sha512 00:23:15.742 16:34:24 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:23:15.742 16:34:24 -- host/auth.sh@68 -- # keyid=2 00:23:15.742 16:34:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:15.742 16:34:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:15.742 16:34:24 -- common/autotest_common.sh@10 -- # 
set +x 00:23:15.742 16:34:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:15.999 16:34:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:15.999 16:34:24 -- nvmf/common.sh@717 -- # local ip 00:23:15.999 16:34:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:15.999 16:34:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:15.999 16:34:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.999 16:34:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.999 16:34:24 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:15.999 16:34:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:15.999 16:34:24 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:15.999 16:34:24 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:15.999 16:34:24 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:15.999 16:34:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:15.999 16:34:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:15.999 16:34:24 -- common/autotest_common.sh@10 -- # set +x 00:23:16.561 nvme0n1 00:23:16.561 16:34:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:16.561 16:34:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.561 16:34:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:16.561 16:34:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:16.561 16:34:25 -- common/autotest_common.sh@10 -- # set +x 00:23:16.561 16:34:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:16.561 16:34:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.561 16:34:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.561 16:34:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:16.561 16:34:25 -- common/autotest_common.sh@10 -- # set +x 00:23:16.562 16:34:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:16.562 16:34:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:16.562 16:34:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:23:16.562 16:34:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:16.562 16:34:25 -- host/auth.sh@44 -- # digest=sha512 00:23:16.562 16:34:25 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:16.562 16:34:25 -- host/auth.sh@44 -- # keyid=3 00:23:16.562 16:34:25 -- host/auth.sh@45 -- # key=DHHC-1:02:Nzk2M2YwZjYyYjg0Zjk2MzM2MGI5MGQ4OTE5MjIxN2Y2OGYyYjIzMjIyZjg3OWVmxPGfTg==: 00:23:16.562 16:34:25 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:16.562 16:34:25 -- host/auth.sh@48 -- # echo ffdhe8192 00:23:16.562 16:34:25 -- host/auth.sh@49 -- # echo DHHC-1:02:Nzk2M2YwZjYyYjg0Zjk2MzM2MGI5MGQ4OTE5MjIxN2Y2OGYyYjIzMjIyZjg3OWVmxPGfTg==: 00:23:16.562 16:34:25 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:23:16.562 16:34:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:16.562 16:34:25 -- host/auth.sh@68 -- # digest=sha512 00:23:16.562 16:34:25 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:23:16.562 16:34:25 -- host/auth.sh@68 -- # keyid=3 00:23:16.562 16:34:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:16.562 16:34:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:16.562 16:34:25 -- common/autotest_common.sh@10 -- # set +x 00:23:16.562 16:34:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:16.562 
16:34:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:16.562 16:34:25 -- nvmf/common.sh@717 -- # local ip 00:23:16.562 16:34:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:16.562 16:34:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:16.562 16:34:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.562 16:34:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.562 16:34:25 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:16.562 16:34:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:16.562 16:34:25 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:16.562 16:34:25 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:16.562 16:34:25 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:16.562 16:34:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:23:16.562 16:34:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:16.562 16:34:25 -- common/autotest_common.sh@10 -- # set +x 00:23:17.125 nvme0n1 00:23:17.125 16:34:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:17.125 16:34:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.125 16:34:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:17.125 16:34:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:17.125 16:34:25 -- common/autotest_common.sh@10 -- # set +x 00:23:17.125 16:34:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:17.125 16:34:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.125 16:34:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.125 16:34:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:17.125 16:34:25 -- common/autotest_common.sh@10 -- # set +x 00:23:17.125 16:34:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:17.125 16:34:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:17.125 16:34:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:23:17.125 16:34:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:17.125 16:34:25 -- host/auth.sh@44 -- # digest=sha512 00:23:17.125 16:34:25 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:17.125 16:34:25 -- host/auth.sh@44 -- # keyid=4 00:23:17.125 16:34:25 -- host/auth.sh@45 -- # key=DHHC-1:03:NjAwMDI5ODQwMzE5MWI3ZTRjZDU1OGI4MTM5ZGY4NTZlMDhhOTUyMGI5MzY1OGFhMTJhOTA1NTk4MWE2ZmRkZgqk8k0=: 00:23:17.125 16:34:25 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:23:17.125 16:34:25 -- host/auth.sh@48 -- # echo ffdhe8192 00:23:17.125 16:34:25 -- host/auth.sh@49 -- # echo DHHC-1:03:NjAwMDI5ODQwMzE5MWI3ZTRjZDU1OGI4MTM5ZGY4NTZlMDhhOTUyMGI5MzY1OGFhMTJhOTA1NTk4MWE2ZmRkZgqk8k0=: 00:23:17.125 16:34:25 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:23:17.125 16:34:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:17.125 16:34:25 -- host/auth.sh@68 -- # digest=sha512 00:23:17.125 16:34:25 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:23:17.125 16:34:25 -- host/auth.sh@68 -- # keyid=4 00:23:17.125 16:34:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:17.125 16:34:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:17.125 16:34:25 -- common/autotest_common.sh@10 -- # set +x 00:23:17.125 16:34:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:17.125 16:34:25 -- host/auth.sh@70 -- # get_main_ns_ip 
00:23:17.125 16:34:25 -- nvmf/common.sh@717 -- # local ip 00:23:17.125 16:34:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:17.125 16:34:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:17.125 16:34:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.125 16:34:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.125 16:34:25 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:17.125 16:34:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:17.125 16:34:25 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:17.125 16:34:25 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:17.125 16:34:25 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:17.125 16:34:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:17.125 16:34:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:17.125 16:34:25 -- common/autotest_common.sh@10 -- # set +x 00:23:17.691 nvme0n1 00:23:17.691 16:34:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:17.691 16:34:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.691 16:34:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:17.691 16:34:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:17.691 16:34:26 -- common/autotest_common.sh@10 -- # set +x 00:23:17.691 16:34:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:17.691 16:34:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.691 16:34:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.691 16:34:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:17.691 16:34:26 -- common/autotest_common.sh@10 -- # set +x 00:23:17.691 16:34:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:17.691 16:34:26 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:17.691 16:34:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:17.691 16:34:26 -- host/auth.sh@44 -- # digest=sha256 00:23:17.691 16:34:26 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:17.691 16:34:26 -- host/auth.sh@44 -- # keyid=1 00:23:17.691 16:34:26 -- host/auth.sh@45 -- # key=DHHC-1:00:MWM4OWNmNjMxZDhlNjU4YWU5NmNhNTJhMWJlYTgxODcwMDlkNjZmYTRkNGM4MTA0bu+4fw==: 00:23:17.691 16:34:26 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:17.691 16:34:26 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:17.691 16:34:26 -- host/auth.sh@49 -- # echo DHHC-1:00:MWM4OWNmNjMxZDhlNjU4YWU5NmNhNTJhMWJlYTgxODcwMDlkNjZmYTRkNGM4MTA0bu+4fw==: 00:23:17.691 16:34:26 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:17.691 16:34:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:17.691 16:34:26 -- common/autotest_common.sh@10 -- # set +x 00:23:17.691 16:34:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:17.691 16:34:26 -- host/auth.sh@119 -- # get_main_ns_ip 00:23:17.691 16:34:26 -- nvmf/common.sh@717 -- # local ip 00:23:17.691 16:34:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:17.691 16:34:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:17.691 16:34:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.691 16:34:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.691 16:34:26 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:17.691 16:34:26 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:23:17.691 16:34:26 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:17.691 16:34:26 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:17.691 16:34:26 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:17.691 16:34:26 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:17.691 16:34:26 -- common/autotest_common.sh@638 -- # local es=0 00:23:17.691 16:34:26 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:17.691 16:34:26 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:17.691 16:34:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:17.691 16:34:26 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:17.691 16:34:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:17.691 16:34:26 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:17.691 16:34:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:17.691 16:34:26 -- common/autotest_common.sh@10 -- # set +x 00:23:17.691 request: 00:23:17.691 { 00:23:17.691 "name": "nvme0", 00:23:17.691 "trtype": "rdma", 00:23:17.691 "traddr": "192.168.100.8", 00:23:17.691 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:17.691 "adrfam": "ipv4", 00:23:17.691 "trsvcid": "4420", 00:23:17.691 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:17.691 "method": "bdev_nvme_attach_controller", 00:23:17.691 "req_id": 1 00:23:17.691 } 00:23:17.691 Got JSON-RPC error response 00:23:17.691 response: 00:23:17.691 { 00:23:17.691 "code": -32602, 00:23:17.691 "message": "Invalid parameters" 00:23:17.691 } 00:23:17.691 16:34:26 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:17.691 16:34:26 -- common/autotest_common.sh@641 -- # es=1 00:23:17.691 16:34:26 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:17.691 16:34:26 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:17.691 16:34:26 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:17.691 16:34:26 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.691 16:34:26 -- host/auth.sh@121 -- # jq length 00:23:17.691 16:34:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:17.691 16:34:26 -- common/autotest_common.sh@10 -- # set +x 00:23:17.692 16:34:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:17.692 16:34:26 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:23:17.692 16:34:26 -- host/auth.sh@124 -- # get_main_ns_ip 00:23:17.692 16:34:26 -- nvmf/common.sh@717 -- # local ip 00:23:17.692 16:34:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:17.692 16:34:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:17.692 16:34:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.692 16:34:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.692 16:34:26 -- nvmf/common.sh@723 -- # [[ -z rdma ]] 00:23:17.692 16:34:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:17.692 16:34:26 -- nvmf/common.sh@724 -- # ip=NVMF_FIRST_TARGET_IP 00:23:17.692 16:34:26 -- nvmf/common.sh@726 -- # [[ -z 192.168.100.8 ]] 00:23:17.692 16:34:26 -- nvmf/common.sh@731 -- # echo 192.168.100.8 00:23:17.692 
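The NOT-wrapped attach that follows (like the one just traced without any --dhchap-key) exercises the failure path: the target must reject the connection and no controller may be left behind. A minimal sketch of that check, using only commands visible in the trace:

    # Negative path: the attach is expected to fail (NOT inverts the exit code)
    # and the controller list must stay empty afterwards.
    NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2    # mismatched secret -> JSON-RPC "Invalid parameters"
    (( $(rpc_cmd bdev_nvme_get_controllers | jq length) == 0 ))
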
16:34:26 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:17.692 16:34:26 -- common/autotest_common.sh@638 -- # local es=0 00:23:17.692 16:34:26 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:17.692 16:34:26 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:17.692 16:34:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:17.692 16:34:26 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:17.692 16:34:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:17.692 16:34:26 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:17.692 16:34:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:17.692 16:34:26 -- common/autotest_common.sh@10 -- # set +x 00:23:17.950 request: 00:23:17.950 { 00:23:17.950 "name": "nvme0", 00:23:17.950 "trtype": "rdma", 00:23:17.950 "traddr": "192.168.100.8", 00:23:17.950 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:17.950 "adrfam": "ipv4", 00:23:17.950 "trsvcid": "4420", 00:23:17.950 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:17.950 "dhchap_key": "key2", 00:23:17.950 "method": "bdev_nvme_attach_controller", 00:23:17.950 "req_id": 1 00:23:17.950 } 00:23:17.950 Got JSON-RPC error response 00:23:17.950 response: 00:23:17.950 { 00:23:17.950 "code": -32602, 00:23:17.950 "message": "Invalid parameters" 00:23:17.950 } 00:23:17.950 16:34:26 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:17.950 16:34:26 -- common/autotest_common.sh@641 -- # es=1 00:23:17.950 16:34:26 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:17.950 16:34:26 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:17.950 16:34:26 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:17.950 16:34:26 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.950 16:34:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:17.950 16:34:26 -- host/auth.sh@127 -- # jq length 00:23:17.950 16:34:26 -- common/autotest_common.sh@10 -- # set +x 00:23:17.950 16:34:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:17.950 16:34:26 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:23:17.950 16:34:26 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:23:17.950 16:34:26 -- host/auth.sh@130 -- # cleanup 00:23:17.950 16:34:26 -- host/auth.sh@24 -- # nvmftestfini 00:23:17.950 16:34:26 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:17.950 16:34:26 -- nvmf/common.sh@117 -- # sync 00:23:17.950 16:34:26 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:23:17.950 16:34:26 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:23:17.950 16:34:26 -- nvmf/common.sh@120 -- # set +e 00:23:17.950 16:34:26 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:17.950 16:34:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:23:17.950 rmmod nvme_rdma 00:23:17.950 rmmod nvme_fabrics 00:23:17.950 16:34:26 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:17.950 16:34:26 -- nvmf/common.sh@124 -- # set -e 00:23:17.950 16:34:26 -- nvmf/common.sh@125 -- # return 0 00:23:17.950 16:34:26 -- nvmf/common.sh@478 -- # '[' -n 555560 ']' 00:23:17.950 
16:34:26 -- nvmf/common.sh@479 -- # killprocess 555560 00:23:17.950 16:34:26 -- common/autotest_common.sh@936 -- # '[' -z 555560 ']' 00:23:17.950 16:34:26 -- common/autotest_common.sh@940 -- # kill -0 555560 00:23:17.950 16:34:26 -- common/autotest_common.sh@941 -- # uname 00:23:17.950 16:34:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:17.950 16:34:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 555560 00:23:17.950 16:34:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:17.950 16:34:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:17.950 16:34:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 555560' 00:23:17.950 killing process with pid 555560 00:23:17.950 16:34:26 -- common/autotest_common.sh@955 -- # kill 555560 00:23:17.950 16:34:26 -- common/autotest_common.sh@960 -- # wait 555560 00:23:18.208 16:34:27 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:18.208 16:34:27 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:23:18.208 16:34:27 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:18.208 16:34:27 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:18.208 16:34:27 -- host/auth.sh@27 -- # clean_kernel_target 00:23:18.208 16:34:27 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:23:18.208 16:34:27 -- nvmf/common.sh@675 -- # echo 0 00:23:18.208 16:34:27 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:18.208 16:34:27 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:18.208 16:34:27 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:18.208 16:34:27 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:18.208 16:34:27 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:23:18.208 16:34:27 -- nvmf/common.sh@684 -- # modprobe -r nvmet_rdma nvmet 00:23:18.208 16:34:27 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:23:21.507 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:23:21.507 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:23:21.507 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:23:21.507 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:23:21.507 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:23:21.507 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:23:21.507 0000:af:00.0 (8086 2701): nvme -> vfio-pci 00:23:21.507 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:23:21.507 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:23:21.507 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:23:21.507 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:23:21.507 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:23:21.507 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:23:21.507 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:23:21.507 0000:b0:00.0 (8086 2701): nvme -> vfio-pci 00:23:21.507 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:23:21.507 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:23:21.507 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:23:22.881 0000:5e:00.0 (144d a80a): nvme -> vfio-pci 00:23:23.140 16:34:31 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.L6n /tmp/spdk.key-null.CtS /tmp/spdk.key-sha256.Wjf /tmp/spdk.key-sha384.syj 
/tmp/spdk.key-sha512.0Ot /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:23:23.140 16:34:31 -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:23:25.666 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:23:25.666 0000:5e:00.0 (144d a80a): Already using the vfio-pci driver 00:23:25.667 0000:af:00.0 (8086 2701): Already using the vfio-pci driver 00:23:25.667 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:23:25.667 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:23:25.667 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:23:25.667 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:23:25.667 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:23:25.667 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:23:25.667 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:23:25.667 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:23:25.667 0000:b0:00.0 (8086 2701): Already using the vfio-pci driver 00:23:25.667 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:23:25.667 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:23:25.667 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:23:25.667 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:23:25.667 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:23:25.667 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:23:25.667 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:23:25.925 00:23:25.925 real 0m53.653s 00:23:25.925 user 0m43.635s 00:23:25.925 sys 0m14.382s 00:23:25.925 16:34:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:25.925 16:34:34 -- common/autotest_common.sh@10 -- # set +x 00:23:25.925 ************************************ 00:23:25.925 END TEST nvmf_auth 00:23:25.925 ************************************ 00:23:25.925 16:34:34 -- nvmf/nvmf.sh@104 -- # [[ rdma == \t\c\p ]] 00:23:25.925 16:34:34 -- nvmf/nvmf.sh@108 -- # [[ 0 -eq 1 ]] 00:23:25.925 16:34:34 -- nvmf/nvmf.sh@113 -- # [[ 0 -eq 1 ]] 00:23:25.925 16:34:34 -- nvmf/nvmf.sh@118 -- # [[ phy == phy ]] 00:23:25.925 16:34:34 -- nvmf/nvmf.sh@119 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:23:25.925 16:34:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:25.926 16:34:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:25.926 16:34:34 -- common/autotest_common.sh@10 -- # set +x 00:23:26.185 ************************************ 00:23:26.185 START TEST nvmf_bdevperf 00:23:26.185 ************************************ 00:23:26.185 16:34:35 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:23:26.185 * Looking for test storage... 
00:23:26.185 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:26.185 16:34:35 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:26.185 16:34:35 -- nvmf/common.sh@7 -- # uname -s 00:23:26.185 16:34:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:26.185 16:34:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:26.185 16:34:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:26.185 16:34:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:26.185 16:34:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:26.185 16:34:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:26.185 16:34:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:26.185 16:34:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:26.185 16:34:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:26.185 16:34:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:26.185 16:34:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:23:26.185 16:34:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:23:26.185 16:34:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:26.185 16:34:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:26.185 16:34:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:26.185 16:34:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:26.185 16:34:35 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:26.185 16:34:35 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:26.185 16:34:35 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:26.185 16:34:35 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:26.185 16:34:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.185 16:34:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.185 16:34:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.185 16:34:35 -- paths/export.sh@5 -- # export PATH 00:23:26.185 16:34:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.185 16:34:35 -- nvmf/common.sh@47 -- # : 0 00:23:26.186 16:34:35 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:26.186 16:34:35 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:26.186 16:34:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:26.186 16:34:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:26.186 16:34:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:26.186 16:34:35 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:26.186 16:34:35 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:26.186 16:34:35 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:26.186 16:34:35 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:26.186 16:34:35 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:26.186 16:34:35 -- host/bdevperf.sh@24 -- # nvmftestinit 00:23:26.186 16:34:35 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:23:26.186 16:34:35 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:26.186 16:34:35 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:26.186 16:34:35 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:26.186 16:34:35 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:26.186 16:34:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:26.186 16:34:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:26.186 16:34:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.186 16:34:35 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:26.186 16:34:35 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:26.186 16:34:35 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:26.186 16:34:35 -- common/autotest_common.sh@10 -- # set +x 00:23:34.301 16:34:41 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:34.301 16:34:41 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:34.301 16:34:41 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:34.301 16:34:41 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:34.301 16:34:41 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:34.301 16:34:41 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:34.301 16:34:41 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:34.301 16:34:41 -- nvmf/common.sh@295 -- # net_devs=() 00:23:34.301 16:34:41 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:34.301 16:34:41 -- nvmf/common.sh@296 
-- # e810=() 00:23:34.301 16:34:41 -- nvmf/common.sh@296 -- # local -ga e810 00:23:34.301 16:34:41 -- nvmf/common.sh@297 -- # x722=() 00:23:34.301 16:34:41 -- nvmf/common.sh@297 -- # local -ga x722 00:23:34.301 16:34:41 -- nvmf/common.sh@298 -- # mlx=() 00:23:34.301 16:34:41 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:34.301 16:34:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:34.301 16:34:41 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:34.301 16:34:41 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:34.301 16:34:41 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:34.301 16:34:41 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:34.301 16:34:41 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:34.301 16:34:41 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:34.301 16:34:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:34.301 16:34:41 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:34.301 16:34:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:34.301 16:34:41 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:34.301 16:34:41 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:34.301 16:34:41 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:23:34.301 16:34:41 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:23:34.301 16:34:41 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:23:34.301 16:34:41 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:23:34.301 16:34:41 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:23:34.301 16:34:41 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:34.301 16:34:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:34.301 16:34:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:23:34.301 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:23:34.301 16:34:41 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:34.301 16:34:41 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:34.301 16:34:41 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:23:34.301 16:34:41 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:23:34.301 16:34:41 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:34.301 16:34:41 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:34.301 16:34:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:34.301 16:34:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:23:34.301 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:23:34.301 16:34:41 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:34.301 16:34:41 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:34.301 16:34:41 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:23:34.301 16:34:41 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:23:34.301 16:34:41 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:34.301 16:34:41 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:34.301 16:34:41 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:34.301 16:34:41 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:23:34.301 16:34:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:34.301 16:34:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:34.301 16:34:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 
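The device discovery being traced here reduces to a sysfs walk; a rough sketch of the logic shown in the nvmf/common.sh xtrace (PCI addresses are the ones reported on this host):

    # For each matching Mellanox function the netdev name is read from sysfs.
    pci_devs=(0000:18:00.0 0000:18:00.1)    # mlx5 (0x15b3:0x1013) found above
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done
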
00:23:34.301 16:34:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:34.302 16:34:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:23:34.302 Found net devices under 0000:18:00.0: mlx_0_0 00:23:34.302 16:34:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:34.302 16:34:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:34.302 16:34:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:34.302 16:34:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:34.302 16:34:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:34.302 16:34:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:23:34.302 Found net devices under 0000:18:00.1: mlx_0_1 00:23:34.302 16:34:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:34.302 16:34:41 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:34.302 16:34:41 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:34.302 16:34:41 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:34.302 16:34:41 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:23:34.302 16:34:41 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:23:34.302 16:34:41 -- nvmf/common.sh@409 -- # rdma_device_init 00:23:34.302 16:34:41 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:23:34.302 16:34:41 -- nvmf/common.sh@58 -- # uname 00:23:34.302 16:34:41 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:23:34.302 16:34:41 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:23:34.302 16:34:41 -- nvmf/common.sh@63 -- # modprobe ib_core 00:23:34.302 16:34:41 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:23:34.302 16:34:41 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:23:34.302 16:34:41 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:23:34.302 16:34:41 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:23:34.302 16:34:41 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:23:34.302 16:34:41 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:23:34.302 16:34:41 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:34.302 16:34:41 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:23:34.302 16:34:41 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:34.302 16:34:41 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:34.302 16:34:41 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:34.302 16:34:41 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:34.302 16:34:41 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:34.302 16:34:41 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:34.302 16:34:41 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:34.302 16:34:41 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:34.302 16:34:41 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:34.302 16:34:41 -- nvmf/common.sh@105 -- # continue 2 00:23:34.302 16:34:41 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:34.302 16:34:41 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:34.302 16:34:41 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:34.302 16:34:41 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:34.302 16:34:41 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:34.302 16:34:41 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:34.302 16:34:41 -- nvmf/common.sh@105 -- # continue 2 00:23:34.302 16:34:41 -- nvmf/common.sh@73 -- # for nic_name in 
$(get_rdma_if_list) 00:23:34.302 16:34:41 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:23:34.302 16:34:41 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:34.302 16:34:41 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:34.302 16:34:41 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:34.302 16:34:41 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:34.302 16:34:41 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:23:34.302 16:34:41 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:23:34.302 16:34:41 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:23:34.302 15: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:34.302 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:23:34.302 altname enp24s0f0np0 00:23:34.302 altname ens785f0np0 00:23:34.302 inet 192.168.100.8/24 scope global mlx_0_0 00:23:34.302 valid_lft forever preferred_lft forever 00:23:34.302 16:34:41 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:34.302 16:34:41 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:23:34.302 16:34:41 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:34.302 16:34:41 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:34.302 16:34:41 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:34.302 16:34:41 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:34.302 16:34:41 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:23:34.302 16:34:41 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:23:34.302 16:34:41 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:23:34.302 16: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:34.302 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:23:34.302 altname enp24s0f1np1 00:23:34.302 altname ens785f1np1 00:23:34.302 inet 192.168.100.9/24 scope global mlx_0_1 00:23:34.302 valid_lft forever preferred_lft forever 00:23:34.302 16:34:41 -- nvmf/common.sh@411 -- # return 0 00:23:34.302 16:34:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:34.302 16:34:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:34.302 16:34:41 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:23:34.302 16:34:41 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:23:34.302 16:34:41 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:23:34.302 16:34:41 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:34.302 16:34:41 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:34.302 16:34:41 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:34.302 16:34:41 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:34.302 16:34:41 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:34.302 16:34:41 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:34.302 16:34:41 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:34.302 16:34:41 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:34.302 16:34:41 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:34.302 16:34:41 -- nvmf/common.sh@105 -- # continue 2 00:23:34.302 16:34:41 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:34.302 16:34:41 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:34.302 16:34:41 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:34.302 16:34:41 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:34.302 16:34:41 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:34.302 16:34:41 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:34.302 16:34:41 -- 
nvmf/common.sh@105 -- # continue 2 00:23:34.302 16:34:41 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:34.302 16:34:41 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:23:34.302 16:34:41 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:34.302 16:34:41 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:34.302 16:34:41 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:34.302 16:34:41 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:34.302 16:34:41 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:34.302 16:34:41 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:23:34.302 16:34:41 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:34.302 16:34:41 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:34.302 16:34:41 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:34.302 16:34:41 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:34.302 16:34:41 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:23:34.302 192.168.100.9' 00:23:34.302 16:34:41 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:23:34.302 192.168.100.9' 00:23:34.303 16:34:41 -- nvmf/common.sh@446 -- # head -n 1 00:23:34.303 16:34:42 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:34.303 16:34:42 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:23:34.303 192.168.100.9' 00:23:34.303 16:34:42 -- nvmf/common.sh@447 -- # tail -n +2 00:23:34.303 16:34:42 -- nvmf/common.sh@447 -- # head -n 1 00:23:34.303 16:34:42 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:34.303 16:34:42 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:23:34.303 16:34:42 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:34.303 16:34:42 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:23:34.303 16:34:42 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:23:34.303 16:34:42 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:23:34.303 16:34:42 -- host/bdevperf.sh@25 -- # tgt_init 00:23:34.303 16:34:42 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:23:34.303 16:34:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:34.303 16:34:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:34.303 16:34:42 -- common/autotest_common.sh@10 -- # set +x 00:23:34.303 16:34:42 -- nvmf/common.sh@470 -- # nvmfpid=567770 00:23:34.303 16:34:42 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:34.303 16:34:42 -- nvmf/common.sh@471 -- # waitforlisten 567770 00:23:34.303 16:34:42 -- common/autotest_common.sh@817 -- # '[' -z 567770 ']' 00:23:34.303 16:34:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:34.303 16:34:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:34.303 16:34:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:34.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:34.303 16:34:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:34.303 16:34:42 -- common/autotest_common.sh@10 -- # set +x 00:23:34.303 [2024-04-26 16:34:42.096798] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
00:23:34.303 [2024-04-26 16:34:42.096857] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:34.303 EAL: No free 2048 kB hugepages reported on node 1 00:23:34.303 [2024-04-26 16:34:42.171659] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:34.303 [2024-04-26 16:34:42.275226] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:34.303 [2024-04-26 16:34:42.275275] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:34.303 [2024-04-26 16:34:42.275289] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:34.303 [2024-04-26 16:34:42.275305] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:34.303 [2024-04-26 16:34:42.275314] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:34.303 [2024-04-26 16:34:42.275428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:34.303 [2024-04-26 16:34:42.275521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:34.303 [2024-04-26 16:34:42.275524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:34.303 16:34:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:34.303 16:34:42 -- common/autotest_common.sh@850 -- # return 0 00:23:34.303 16:34:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:34.303 16:34:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:34.303 16:34:42 -- common/autotest_common.sh@10 -- # set +x 00:23:34.303 16:34:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:34.303 16:34:42 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:34.303 16:34:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:34.303 16:34:42 -- common/autotest_common.sh@10 -- # set +x 00:23:34.303 [2024-04-26 16:34:42.997051] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xeb6b30/0xebb020) succeed. 00:23:34.303 [2024-04-26 16:34:43.008452] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xeb80d0/0xefc6b0) succeed. 
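The 192.168.100.8 / 192.168.100.9 addresses used from here on come from the get_ip_address pipeline traced a little earlier (ip -o -4 addr show <if> | awk '{print $4}' | cut -d/ -f1). A minimal standalone sketch of that lookup, assuming only the renamed RDMA netdevs seen in this run (mlx_0_0, mlx_0_1) with IPv4 addresses already assigned:

#!/usr/bin/env bash
# Sketch of the get_ip_address / RDMA_IP_LIST logic from nvmf/common.sh
# as traced above; mlx_0_0 and mlx_0_1 are the interface names from this run.
set -euo pipefail

get_ip_address() {
    local interface=$1
    # Same pipeline as the trace: first IPv4 address, netmask stripped.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

rdma_ips=()
for nic in mlx_0_0 mlx_0_1; do
    rdma_ips+=("$(get_ip_address "$nic")")
done

# The first address becomes the NVMe-oF listener address; the second is kept
# as a spare, exactly as NVMF_FIRST/SECOND_TARGET_IP are derived in the log.
NVMF_FIRST_TARGET_IP=${rdma_ips[0]}
NVMF_SECOND_TARGET_IP=${rdma_ips[1]:-}
echo "target IP: $NVMF_FIRST_TARGET_IP (secondary: $NVMF_SECOND_TARGET_IP)"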
00:23:34.303 16:34:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:34.303 16:34:43 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:34.303 16:34:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:34.303 16:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:34.303 Malloc0 00:23:34.303 16:34:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:34.303 16:34:43 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:34.303 16:34:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:34.303 16:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:34.303 16:34:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:34.303 16:34:43 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:34.303 16:34:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:34.303 16:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:34.303 16:34:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:34.303 16:34:43 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:34.303 16:34:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:34.303 16:34:43 -- common/autotest_common.sh@10 -- # set +x 00:23:34.303 [2024-04-26 16:34:43.162767] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:34.303 16:34:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:34.303 16:34:43 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:23:34.303 16:34:43 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:23:34.303 16:34:43 -- nvmf/common.sh@521 -- # config=() 00:23:34.303 16:34:43 -- nvmf/common.sh@521 -- # local subsystem config 00:23:34.303 16:34:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:34.303 16:34:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:34.303 { 00:23:34.303 "params": { 00:23:34.303 "name": "Nvme$subsystem", 00:23:34.303 "trtype": "$TEST_TRANSPORT", 00:23:34.303 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:34.303 "adrfam": "ipv4", 00:23:34.303 "trsvcid": "$NVMF_PORT", 00:23:34.303 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:34.303 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:34.303 "hdgst": ${hdgst:-false}, 00:23:34.303 "ddgst": ${ddgst:-false} 00:23:34.303 }, 00:23:34.303 "method": "bdev_nvme_attach_controller" 00:23:34.303 } 00:23:34.303 EOF 00:23:34.303 )") 00:23:34.303 16:34:43 -- nvmf/common.sh@543 -- # cat 00:23:34.303 16:34:43 -- nvmf/common.sh@545 -- # jq . 00:23:34.303 16:34:43 -- nvmf/common.sh@546 -- # IFS=, 00:23:34.303 16:34:43 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:23:34.303 "params": { 00:23:34.303 "name": "Nvme1", 00:23:34.303 "trtype": "rdma", 00:23:34.303 "traddr": "192.168.100.8", 00:23:34.303 "adrfam": "ipv4", 00:23:34.303 "trsvcid": "4420", 00:23:34.303 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:34.303 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:34.303 "hdgst": false, 00:23:34.303 "ddgst": false 00:23:34.303 }, 00:23:34.303 "method": "bdev_nvme_attach_controller" 00:23:34.303 }' 00:23:34.303 [2024-04-26 16:34:43.213379] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
00:23:34.303 [2024-04-26 16:34:43.213443] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid567969 ] 00:23:34.303 EAL: No free 2048 kB hugepages reported on node 1 00:23:34.303 [2024-04-26 16:34:43.286372] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.561 [2024-04-26 16:34:43.367134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:34.561 Running I/O for 1 seconds... 00:23:35.937 00:23:35.937 Latency(us) 00:23:35.937 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.937 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:35.937 Verification LBA range: start 0x0 length 0x4000 00:23:35.937 Nvme1n1 : 1.01 18054.31 70.52 0.00 0.00 7051.12 2564.45 12537.32 00:23:35.937 =================================================================================================================== 00:23:35.937 Total : 18054.31 70.52 0.00 0.00 7051.12 2564.45 12537.32 00:23:35.937 16:34:44 -- host/bdevperf.sh@30 -- # bdevperfpid=568161 00:23:35.937 16:34:44 -- host/bdevperf.sh@32 -- # sleep 3 00:23:35.937 16:34:44 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:23:35.937 16:34:44 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:23:35.937 16:34:44 -- nvmf/common.sh@521 -- # config=() 00:23:35.937 16:34:44 -- nvmf/common.sh@521 -- # local subsystem config 00:23:35.937 16:34:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:35.937 16:34:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:35.937 { 00:23:35.937 "params": { 00:23:35.937 "name": "Nvme$subsystem", 00:23:35.937 "trtype": "$TEST_TRANSPORT", 00:23:35.937 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:35.937 "adrfam": "ipv4", 00:23:35.937 "trsvcid": "$NVMF_PORT", 00:23:35.937 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:35.937 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:35.937 "hdgst": ${hdgst:-false}, 00:23:35.937 "ddgst": ${ddgst:-false} 00:23:35.937 }, 00:23:35.937 "method": "bdev_nvme_attach_controller" 00:23:35.937 } 00:23:35.937 EOF 00:23:35.937 )") 00:23:35.937 16:34:44 -- nvmf/common.sh@543 -- # cat 00:23:35.937 16:34:44 -- nvmf/common.sh@545 -- # jq . 00:23:35.937 16:34:44 -- nvmf/common.sh@546 -- # IFS=, 00:23:35.937 16:34:44 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:23:35.937 "params": { 00:23:35.937 "name": "Nvme1", 00:23:35.937 "trtype": "rdma", 00:23:35.937 "traddr": "192.168.100.8", 00:23:35.937 "adrfam": "ipv4", 00:23:35.937 "trsvcid": "4420", 00:23:35.937 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:35.937 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:35.937 "hdgst": false, 00:23:35.937 "ddgst": false 00:23:35.937 }, 00:23:35.937 "method": "bdev_nvme_attach_controller" 00:23:35.937 }' 00:23:35.937 [2024-04-26 16:34:44.837085] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
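The printf/jq output above is the bdev_nvme_attach_controller entry that bdevperf reads through /dev/fd/62 and /dev/fd/63. A rough standalone equivalent written to a regular file instead of a file descriptor is sketched below; the surrounding "subsystems"/"bdev" wrapper is an assumption about what gen_nvmf_target_json emits around the entry printed in the trace, and the bdevperf options are copied from the 15-second run started here.

#!/usr/bin/env bash
# Sketch only: rebuild the bdevperf JSON config shown in the trace and feed
# it from a file rather than /dev/fd/63.  The subsystems/bdev wrapper is an
# assumption; the attach-controller params are taken verbatim from the log.
cat > /tmp/nvmf_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "rdma",
            "traddr": "192.168.100.8",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF

# Same options as the traced run: queue depth 128, 4 KiB I/Os, verify
# workload for 15 seconds; -f is passed exactly as in the trace above.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/nvmf_bdev.json -q 128 -o 4096 -w verify -t 15 -f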
00:23:35.937 [2024-04-26 16:34:44.837147] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid568161 ] 00:23:35.937 EAL: No free 2048 kB hugepages reported on node 1 00:23:35.937 [2024-04-26 16:34:44.910403] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.195 [2024-04-26 16:34:44.989591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:36.195 Running I/O for 15 seconds... 00:23:39.476 16:34:47 -- host/bdevperf.sh@33 -- # kill -9 567770 00:23:39.476 16:34:47 -- host/bdevperf.sh@35 -- # sleep 3 00:23:40.044 [2024-04-26 16:34:48.827590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:120064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.045 [2024-04-26 16:34:48.827641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.045 [2024-04-26 16:34:48.827662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:120072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.045 [2024-04-26 16:34:48.827672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.045 [2024-04-26 16:34:48.827688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:120080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.045 [2024-04-26 16:34:48.827697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.045 [2024-04-26 16:34:48.827707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.045 [2024-04-26 16:34:48.827716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.045 [2024-04-26 16:34:48.827726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:120096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.045 [2024-04-26 16:34:48.827735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.045 [2024-04-26 16:34:48.827745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:120104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.045 [2024-04-26 16:34:48.827754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.045 [2024-04-26 16:34:48.827764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:120112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.045 [2024-04-26 16:34:48.827773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.045 [2024-04-26 16:34:48.827784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:120120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.045 [2024-04-26 16:34:48.827792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.045 [2024-04-26 16:34:48.827802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:120128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.045 [2024-04-26 16:34:48.827810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.045 [2024-04-26 16:34:48.827820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:120136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.045 [2024-04-26 16:34:48.827829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.045 [2024-04-26 16:34:48.827839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:120144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.045 [2024-04-26 16:34:48.827847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.045 [2024-04-26 16:34:48.827857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:120152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.045 [2024-04-26 16:34:48.827865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.045 [2024-04-26 16:34:48.827875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:120160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.045 [2024-04-26 16:34:48.827883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.045 [2024-04-26 16:34:48.827893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:120168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.045 [2024-04-26 16:34:48.827902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.045 [2024-04-26 16:34:48.827912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:120176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.045 [2024-04-26 16:34:48.827922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.045 [2024-04-26 16:34:48.827932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:120184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.045 [2024-04-26 16:34:48.827940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.045 [2024-04-26 16:34:48.827950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:120192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.045 [2024-04-26 16:34:48.827958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.045 [2024-04-26 16:34:48.827969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:120200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.045 [2024-04-26 16:34:48.827978] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.045 [2024-04-26 16:34:48.827988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:120208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.045 [2024-04-26 16:34:48.827997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.045 [2024-04-26 16:34:48.828006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:120216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.045 [2024-04-26 16:34:48.828015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.045 [2024-04-26 16:34:48.828028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:120224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.045 [2024-04-26 16:34:48.828036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.045 [2024-04-26 16:34:48.828046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:120232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.045 [2024-04-26 16:34:48.828055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.045 [2024-04-26 16:34:48.828065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:120240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.045 [2024-04-26 16:34:48.828073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.045 [2024-04-26 16:34:48.828083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:120248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.045 [2024-04-26 16:34:48.828091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.045 [2024-04-26 16:34:48.828101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:120256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.045 [2024-04-26 16:34:48.828110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.045 [2024-04-26 16:34:48.828120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:120264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.045 [2024-04-26 16:34:48.828128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.045 [2024-04-26 16:34:48.828138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:120272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.045 [2024-04-26 16:34:48.828147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.045 [2024-04-26 16:34:48.828157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:120280 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:40.045 [2024-04-26 16:34:48.828166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.045 [2024-04-26 16:34:48.828175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:120288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.045 [2024-04-26 16:34:48.828184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.045 [2024-04-26 16:34:48.828193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:120296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.045 [2024-04-26 16:34:48.828202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.045 [2024-04-26 16:34:48.828212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:120304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.045 [2024-04-26 16:34:48.828220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.045 [2024-04-26 16:34:48.828230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:120312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.045 [2024-04-26 16:34:48.828239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.045 [2024-04-26 16:34:48.828249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:120320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.045 [2024-04-26 16:34:48.828257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.045 [2024-04-26 16:34:48.828267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:120328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.045 [2024-04-26 16:34:48.828275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.045 [2024-04-26 16:34:48.828285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:120336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.045 [2024-04-26 16:34:48.828294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.045 [2024-04-26 16:34:48.828304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:120344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.045 [2024-04-26 16:34:48.828312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.045 [2024-04-26 16:34:48.828324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:120352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.045 [2024-04-26 16:34:48.828332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.045 [2024-04-26 16:34:48.828342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:67 nsid:1 lba:120360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.045 [2024-04-26 16:34:48.828355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.045 [2024-04-26 16:34:48.828364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:120368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.045 [2024-04-26 16:34:48.828390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.046 [2024-04-26 16:34:48.828400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:120376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.046 [2024-04-26 16:34:48.828409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.046 [2024-04-26 16:34:48.828419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:120384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.046 [2024-04-26 16:34:48.828428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.046 [2024-04-26 16:34:48.828439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:120392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.046 [2024-04-26 16:34:48.828448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.046 [2024-04-26 16:34:48.828458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:120400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.046 [2024-04-26 16:34:48.828467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.046 [2024-04-26 16:34:48.828477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:120408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.046 [2024-04-26 16:34:48.828486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.046 [2024-04-26 16:34:48.828496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:120416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.046 [2024-04-26 16:34:48.828505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.046 [2024-04-26 16:34:48.828515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:120424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.046 [2024-04-26 16:34:48.828524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.046 [2024-04-26 16:34:48.828534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:120432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.046 [2024-04-26 16:34:48.828543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.046 [2024-04-26 16:34:48.828554] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:120440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.046 [2024-04-26 16:34:48.828562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.046 [2024-04-26 16:34:48.828573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:120448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.046 [2024-04-26 16:34:48.828582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.046 [2024-04-26 16:34:48.828592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:120456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.046 [2024-04-26 16:34:48.828601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.046 [2024-04-26 16:34:48.828611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:120464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.046 [2024-04-26 16:34:48.828620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.046 [2024-04-26 16:34:48.828635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:120472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.046 [2024-04-26 16:34:48.828645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.046 [2024-04-26 16:34:48.828656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:120480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.046 [2024-04-26 16:34:48.828665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.046 [2024-04-26 16:34:48.828676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:120488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.046 [2024-04-26 16:34:48.828685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.046 [2024-04-26 16:34:48.828695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:120496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.046 [2024-04-26 16:34:48.828704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.046 [2024-04-26 16:34:48.828715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:120504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.046 [2024-04-26 16:34:48.828723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.046 [2024-04-26 16:34:48.828734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:120512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.046 [2024-04-26 16:34:48.828742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 
00:23:40.046 [2024-04-26 16:34:48.828753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:120520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.046 [2024-04-26 16:34:48.828762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.046 [2024-04-26 16:34:48.828773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:120528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.046 [2024-04-26 16:34:48.828781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.046 [2024-04-26 16:34:48.828792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:120536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.046 [2024-04-26 16:34:48.828800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.046 [2024-04-26 16:34:48.828811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:120544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.046 [2024-04-26 16:34:48.828820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.046 [2024-04-26 16:34:48.828830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:120552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.046 [2024-04-26 16:34:48.828839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.046 [2024-04-26 16:34:48.828849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:120560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.046 [2024-04-26 16:34:48.828858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.046 [2024-04-26 16:34:48.828870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:120568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.046 [2024-04-26 16:34:48.828880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.046 [2024-04-26 16:34:48.828890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:120576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.046 [2024-04-26 16:34:48.828899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.046 [2024-04-26 16:34:48.828909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:120584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.046 [2024-04-26 16:34:48.828918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.046 [2024-04-26 16:34:48.828929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:120592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.046 [2024-04-26 16:34:48.828938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.046 [2024-04-26 16:34:48.828948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:120600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.046 [2024-04-26 16:34:48.828957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.046 [2024-04-26 16:34:48.828969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:120608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.046 [2024-04-26 16:34:48.828978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.046 [2024-04-26 16:34:48.828989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:120616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.046 [2024-04-26 16:34:48.828998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.046 [2024-04-26 16:34:48.829008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:120624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.046 [2024-04-26 16:34:48.829017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.046 [2024-04-26 16:34:48.829027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:120632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.046 [2024-04-26 16:34:48.829037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.046 [2024-04-26 16:34:48.829047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:120640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.046 [2024-04-26 16:34:48.829056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.046 [2024-04-26 16:34:48.829066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:120648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.046 [2024-04-26 16:34:48.829075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.046 [2024-04-26 16:34:48.829086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:120656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.046 [2024-04-26 16:34:48.829094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.046 [2024-04-26 16:34:48.829105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:120664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.046 [2024-04-26 16:34:48.829116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.046 [2024-04-26 16:34:48.829126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:120672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.046 [2024-04-26 16:34:48.829135] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.047 [2024-04-26 16:34:48.829145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:120680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.047 [2024-04-26 16:34:48.829154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.047 [2024-04-26 16:34:48.829164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:120688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.047 [2024-04-26 16:34:48.829173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.047 [2024-04-26 16:34:48.829184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:120696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.047 [2024-04-26 16:34:48.829192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.047 [2024-04-26 16:34:48.829203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:120704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.047 [2024-04-26 16:34:48.829211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.047 [2024-04-26 16:34:48.829224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:120712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.047 [2024-04-26 16:34:48.829233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.047 [2024-04-26 16:34:48.829244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:120720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.047 [2024-04-26 16:34:48.829253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.047 [2024-04-26 16:34:48.829263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:120728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.047 [2024-04-26 16:34:48.829272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.047 [2024-04-26 16:34:48.829283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:120736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.047 [2024-04-26 16:34:48.829292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.047 [2024-04-26 16:34:48.829302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:120744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.047 [2024-04-26 16:34:48.829311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.047 [2024-04-26 16:34:48.829321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:120752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:40.047 [2024-04-26 16:34:48.829330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.047 [2024-04-26 16:34:48.829341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:120760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.047 [2024-04-26 16:34:48.829355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.047 [2024-04-26 16:34:48.829366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:120768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.047 [2024-04-26 16:34:48.829374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.047 [2024-04-26 16:34:48.829385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:120776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.047 [2024-04-26 16:34:48.829394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.047 [2024-04-26 16:34:48.829404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:120784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.047 [2024-04-26 16:34:48.829413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.047 [2024-04-26 16:34:48.829423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:120792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.047 [2024-04-26 16:34:48.829432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.047 [2024-04-26 16:34:48.829443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:120800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.047 [2024-04-26 16:34:48.829451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.047 [2024-04-26 16:34:48.829462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:120808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.047 [2024-04-26 16:34:48.829471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.047 [2024-04-26 16:34:48.829481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:120816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.047 [2024-04-26 16:34:48.829490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.047 [2024-04-26 16:34:48.829500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:120824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:40.047 [2024-04-26 16:34:48.829510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.047 [2024-04-26 16:34:48.829521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 
lba:119808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fe000 len:0x1000 key:0x185e00 00:23:40.047 [2024-04-26 16:34:48.829530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.047 [2024-04-26 16:34:48.829541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:119816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fc000 len:0x1000 key:0x185e00 00:23:40.047 [2024-04-26 16:34:48.829550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.047 [2024-04-26 16:34:48.829562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:119824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fa000 len:0x1000 key:0x185e00 00:23:40.047 [2024-04-26 16:34:48.829571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.047 [2024-04-26 16:34:48.829582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:119832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f8000 len:0x1000 key:0x185e00 00:23:40.047 [2024-04-26 16:34:48.829592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.047 [2024-04-26 16:34:48.829603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:119840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f6000 len:0x1000 key:0x185e00 00:23:40.047 [2024-04-26 16:34:48.829612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.047 [2024-04-26 16:34:48.829622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:119848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f4000 len:0x1000 key:0x185e00 00:23:40.047 [2024-04-26 16:34:48.829632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.047 [2024-04-26 16:34:48.829642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:119856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f2000 len:0x1000 key:0x185e00 00:23:40.047 [2024-04-26 16:34:48.829651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.047 [2024-04-26 16:34:48.829662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:119864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f0000 len:0x1000 key:0x185e00 00:23:40.047 [2024-04-26 16:34:48.829671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.047 [2024-04-26 16:34:48.829682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:119872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ee000 len:0x1000 key:0x185e00 00:23:40.047 [2024-04-26 16:34:48.829691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.047 [2024-04-26 16:34:48.829701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:119880 len:8 SGL KEYED DATA 
BLOCK ADDRESS 0x2000075ec000 len:0x1000 key:0x185e00 00:23:40.047 [2024-04-26 16:34:48.829710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.047 [2024-04-26 16:34:48.829721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:119888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0x185e00 00:23:40.047 [2024-04-26 16:34:48.829729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.047 [2024-04-26 16:34:48.829740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:119896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e8000 len:0x1000 key:0x185e00 00:23:40.047 [2024-04-26 16:34:48.829749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.047 [2024-04-26 16:34:48.829760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:119904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e6000 len:0x1000 key:0x185e00 00:23:40.047 [2024-04-26 16:34:48.829769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.047 [2024-04-26 16:34:48.829779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:119912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e4000 len:0x1000 key:0x185e00 00:23:40.047 [2024-04-26 16:34:48.829788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.047 [2024-04-26 16:34:48.829799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:119920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e2000 len:0x1000 key:0x185e00 00:23:40.047 [2024-04-26 16:34:48.829809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.047 [2024-04-26 16:34:48.829820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:119928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e0000 len:0x1000 key:0x185e00 00:23:40.047 [2024-04-26 16:34:48.829829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.047 [2024-04-26 16:34:48.829839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:119936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075de000 len:0x1000 key:0x185e00 00:23:40.047 [2024-04-26 16:34:48.829848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.047 [2024-04-26 16:34:48.829858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:119944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dc000 len:0x1000 key:0x185e00 00:23:40.047 [2024-04-26 16:34:48.829868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.047 [2024-04-26 16:34:48.829878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:119952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 
len:0x1000 key:0x185e00 00:23:40.048 [2024-04-26 16:34:48.829887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.048 [2024-04-26 16:34:48.829897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:119960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d8000 len:0x1000 key:0x185e00 00:23:40.048 [2024-04-26 16:34:48.829906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.048 [2024-04-26 16:34:48.829917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:119968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d6000 len:0x1000 key:0x185e00 00:23:40.048 [2024-04-26 16:34:48.829926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.048 [2024-04-26 16:34:48.829937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:119976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x185e00 00:23:40.048 [2024-04-26 16:34:48.829946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.048 [2024-04-26 16:34:48.829956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:119984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d2000 len:0x1000 key:0x185e00 00:23:40.048 [2024-04-26 16:34:48.829965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.048 [2024-04-26 16:34:48.829976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:119992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x185e00 00:23:40.048 [2024-04-26 16:34:48.829985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.048 [2024-04-26 16:34:48.829995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:120000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ce000 len:0x1000 key:0x185e00 00:23:40.048 [2024-04-26 16:34:48.830004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.048 [2024-04-26 16:34:48.830015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:120008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cc000 len:0x1000 key:0x185e00 00:23:40.048 [2024-04-26 16:34:48.830024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.048 [2024-04-26 16:34:48.830037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:120016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ca000 len:0x1000 key:0x185e00 00:23:40.048 [2024-04-26 16:34:48.830047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.048 [2024-04-26 16:34:48.830058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:120024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x185e00 
00:23:40.048 [2024-04-26 16:34:48.830067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.048 [2024-04-26 16:34:48.830077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:120032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c6000 len:0x1000 key:0x185e00 00:23:40.048 [2024-04-26 16:34:48.830086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.048 [2024-04-26 16:34:48.830097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:120040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0x185e00 00:23:40.048 [2024-04-26 16:34:48.830106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.048 [2024-04-26 16:34:48.830116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:120048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c2000 len:0x1000 key:0x185e00 00:23:40.048 [2024-04-26 16:34:48.830125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32764 cdw0:3eff200 sqhd:89a0 p:0 m:0 dnr:0 00:23:40.048 [2024-04-26 16:34:48.831376] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:40.048 [2024-04-26 16:34:48.831390] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:40.048 [2024-04-26 16:34:48.831398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120056 len:8 PRP1 0x0 PRP2 0x0 00:23:40.048 [2024-04-26 16:34:48.831410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.048 [2024-04-26 16:34:48.831456] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4a00 was disconnected and freed. reset controller. 00:23:40.048 [2024-04-26 16:34:48.834199] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:40.048 [2024-04-26 16:34:48.848040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:40.048 [2024-04-26 16:34:48.850369] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:40.048 [2024-04-26 16:34:48.850391] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:40.048 [2024-04-26 16:34:48.850399] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:23:41.008 [2024-04-26 16:34:49.853882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:41.008 [2024-04-26 16:34:49.853907] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
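
The completion flood above is the bdevperf host draining its queue after the target's submission queue went away: every outstanding READ returns ABORTED - SQ DELETION (00/08), the qpair is disconnected and freed, and the controller reset that follows is rejected at the RDMA CM level (RDMA_CM_EVENT_REJECTED, connect error -74). A minimal sketch for condensing such a dump into counts, assuming the console output has been saved to a file; the file name here is hypothetical and not produced by this run:

# Summarize an abort flood from a saved console log (LOG name is an assumption).
LOG=console.log
# completions per abort status and queue id
grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' "$LOG" | sort | uniq -c
# smallest and largest LBA touched by the aborted READs
grep -o 'lba:[0-9]*' "$LOG" | cut -d: -f2 | sort -n | sed -n '1p;$p'
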
00:23:41.008 [2024-04-26 16:34:49.854106] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:41.008 [2024-04-26 16:34:49.854119] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:41.008 [2024-04-26 16:34:49.854130] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:23:41.008 [2024-04-26 16:34:49.857195] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:41.008 [2024-04-26 16:34:49.863561] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:41.008 [2024-04-26 16:34:49.865706] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:41.008 [2024-04-26 16:34:49.865728] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:41.008 [2024-04-26 16:34:49.865738] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:23:41.942 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 567770 Killed "${NVMF_APP[@]}" "$@" 00:23:41.942 16:34:50 -- host/bdevperf.sh@36 -- # tgt_init 00:23:41.942 16:34:50 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:23:41.942 16:34:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:41.942 16:34:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:41.942 16:34:50 -- common/autotest_common.sh@10 -- # set +x 00:23:41.942 16:34:50 -- nvmf/common.sh@470 -- # nvmfpid=568897 00:23:41.942 16:34:50 -- nvmf/common.sh@471 -- # waitforlisten 568897 00:23:41.942 16:34:50 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:41.942 16:34:50 -- common/autotest_common.sh@817 -- # '[' -z 568897 ']' 00:23:41.942 16:34:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:41.942 16:34:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:41.942 16:34:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:41.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:41.942 16:34:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:41.942 16:34:50 -- common/autotest_common.sh@10 -- # set +x 00:23:41.942 [2024-04-26 16:34:50.857002] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:23:41.942 [2024-04-26 16:34:50.857068] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:41.942 [2024-04-26 16:34:50.869235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:41.942 [2024-04-26 16:34:50.869269] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
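
At this point bdevperf.sh has killed the original target application (pid 567770) and tgt_init starts a fresh nvmf_tgt (pid 568897), with waitforlisten blocking until the new process answers on /var/tmp/spdk.sock. A rough stand-in for that wait, not the autotest_common.sh implementation, just one way to poll the RPC socket:

# Poll until a freshly started nvmf_tgt answers on its default RPC socket.
SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
for _ in $(seq 1 100); do
    if "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; then
        echo "nvmf_tgt is up"
        break
    fi
    sleep 0.1
done
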
00:23:41.942 [2024-04-26 16:34:50.869456] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:41.942 [2024-04-26 16:34:50.869469] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:41.942 [2024-04-26 16:34:50.869481] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:23:41.942 [2024-04-26 16:34:50.872215] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:41.942 [2024-04-26 16:34:50.877243] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:41.942 [2024-04-26 16:34:50.879546] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:41.942 [2024-04-26 16:34:50.879568] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:41.942 [2024-04-26 16:34:50.879583] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:23:41.942 EAL: No free 2048 kB hugepages reported on node 1 00:23:41.942 [2024-04-26 16:34:50.931017] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:42.201 [2024-04-26 16:34:51.016455] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:42.201 [2024-04-26 16:34:51.016503] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:42.201 [2024-04-26 16:34:51.016513] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:42.201 [2024-04-26 16:34:51.016537] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:42.201 [2024-04-26 16:34:51.016549] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:42.201 [2024-04-26 16:34:51.016597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:42.201 [2024-04-26 16:34:51.016671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:42.201 [2024-04-26 16:34:51.016673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:42.766 16:34:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:42.766 16:34:51 -- common/autotest_common.sh@850 -- # return 0 00:23:42.766 16:34:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:42.766 16:34:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:42.766 16:34:51 -- common/autotest_common.sh@10 -- # set +x 00:23:42.766 16:34:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:42.766 16:34:51 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:42.766 16:34:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:42.766 16:34:51 -- common/autotest_common.sh@10 -- # set +x 00:23:42.766 [2024-04-26 16:34:51.748558] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1552b30/0x1557020) succeed. 00:23:42.766 [2024-04-26 16:34:51.758607] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x15540d0/0x15986b0) succeed. 
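
The restarted target repeats the tracing hint printed above: a snapshot of its tracepoints can be pulled from shared memory while it runs, or the raw shm file can be kept for offline analysis. Following that hint, with the build path below being this workspace's layout and possibly different elsewhere:

# Decode the live trace for app instance 0, or keep the raw shm file for later.
SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
"$SPDK_DIR/build/bin/spdk_trace" -s nvmf -i 0 > /tmp/nvmf_trace.txt
cp /dev/shm/nvmf_trace.0 /tmp/
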
00:23:43.025 16:34:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:43.025 16:34:51 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:43.025 16:34:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:43.025 16:34:51 -- common/autotest_common.sh@10 -- # set +x 00:23:43.025 Malloc0 00:23:43.025 16:34:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:43.025 16:34:51 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:43.025 16:34:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:43.025 16:34:51 -- common/autotest_common.sh@10 -- # set +x 00:23:43.025 [2024-04-26 16:34:51.883175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:43.025 [2024-04-26 16:34:51.883205] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:43.025 [2024-04-26 16:34:51.883388] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:43.025 [2024-04-26 16:34:51.883401] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:43.025 [2024-04-26 16:34:51.883411] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:23:43.025 [2024-04-26 16:34:51.886136] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:43.025 16:34:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:43.025 16:34:51 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:43.025 16:34:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:43.025 16:34:51 -- common/autotest_common.sh@10 -- # set +x 00:23:43.025 [2024-04-26 16:34:51.890488] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:43.025 [2024-04-26 16:34:51.892752] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:43.025 [2024-04-26 16:34:51.892773] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:43.025 [2024-04-26 16:34:51.892782] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed080 00:23:43.025 16:34:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:43.025 16:34:51 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:43.025 16:34:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:43.025 16:34:51 -- common/autotest_common.sh@10 -- # set +x 00:23:43.025 [2024-04-26 16:34:51.900330] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:43.025 16:34:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:43.025 16:34:51 -- host/bdevperf.sh@38 -- # wait 568161 00:23:43.958 [2024-04-26 16:34:52.896358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:43.958 [2024-04-26 16:34:52.896379] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
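
The rpc_cmd calls traced in this block map one-to-one onto plain scripts/rpc.py invocations. A minimal sketch of the same target configuration, assuming the application is already listening on the default /var/tmp/spdk.sock:

# Recreate the bdevperf target setup by hand (same arguments as the traced rpc_cmd calls).
RPC="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py"
$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
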
00:23:43.958 [2024-04-26 16:34:52.896552] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:43.958 [2024-04-26 16:34:52.896563] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:43.958 [2024-04-26 16:34:52.896573] nvme_ctrlr.c:1030:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:23:43.958 [2024-04-26 16:34:52.899309] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:43.958 [2024-04-26 16:34:52.904148] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:43.958 [2024-04-26 16:34:52.947549] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:52.064 00:23:52.064 Latency(us) 00:23:52.064 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:52.064 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:52.064 Verification LBA range: start 0x0 length 0x4000 00:23:52.064 Nvme1n1 : 15.00 11941.06 46.64 13613.34 0.00 4990.16 352.61 1035810.73 00:23:52.064 =================================================================================================================== 00:23:52.064 Total : 11941.06 46.64 13613.34 0.00 4990.16 352.61 1035810.73 00:23:52.064 16:35:00 -- host/bdevperf.sh@39 -- # sync 00:23:52.064 16:35:00 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:52.064 16:35:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.064 16:35:00 -- common/autotest_common.sh@10 -- # set +x 00:23:52.064 16:35:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.064 16:35:00 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:23:52.064 16:35:00 -- host/bdevperf.sh@44 -- # nvmftestfini 00:23:52.064 16:35:00 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:52.064 16:35:00 -- nvmf/common.sh@117 -- # sync 00:23:52.064 16:35:00 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:23:52.064 16:35:00 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:23:52.064 16:35:00 -- nvmf/common.sh@120 -- # set +e 00:23:52.064 16:35:00 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:52.064 16:35:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:23:52.064 rmmod nvme_rdma 00:23:52.064 rmmod nvme_fabrics 00:23:52.064 16:35:00 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:52.064 16:35:00 -- nvmf/common.sh@124 -- # set -e 00:23:52.064 16:35:00 -- nvmf/common.sh@125 -- # return 0 00:23:52.064 16:35:00 -- nvmf/common.sh@478 -- # '[' -n 568897 ']' 00:23:52.064 16:35:00 -- nvmf/common.sh@479 -- # killprocess 568897 00:23:52.064 16:35:00 -- common/autotest_common.sh@936 -- # '[' -z 568897 ']' 00:23:52.064 16:35:00 -- common/autotest_common.sh@940 -- # kill -0 568897 00:23:52.064 16:35:00 -- common/autotest_common.sh@941 -- # uname 00:23:52.064 16:35:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:52.064 16:35:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 568897 00:23:52.064 16:35:00 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:52.064 16:35:00 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:52.064 16:35:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 568897' 00:23:52.064 killing process with pid 568897 00:23:52.064 16:35:00 -- common/autotest_common.sh@955 -- # kill 568897 
00:23:52.064 16:35:00 -- common/autotest_common.sh@960 -- # wait 568897 00:23:52.064 [2024-04-26 16:35:00.635792] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:23:52.064 16:35:00 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:52.064 16:35:00 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:23:52.064 00:23:52.064 real 0m25.835s 00:23:52.064 user 1m4.913s 00:23:52.064 sys 0m6.586s 00:23:52.064 16:35:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:52.064 16:35:00 -- common/autotest_common.sh@10 -- # set +x 00:23:52.064 ************************************ 00:23:52.064 END TEST nvmf_bdevperf 00:23:52.064 ************************************ 00:23:52.064 16:35:00 -- nvmf/nvmf.sh@120 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:23:52.064 16:35:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:52.064 16:35:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:52.064 16:35:00 -- common/autotest_common.sh@10 -- # set +x 00:23:52.064 ************************************ 00:23:52.064 START TEST nvmf_target_disconnect 00:23:52.064 ************************************ 00:23:52.064 16:35:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:23:52.323 * Looking for test storage... 00:23:52.323 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:52.323 16:35:01 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:52.323 16:35:01 -- nvmf/common.sh@7 -- # uname -s 00:23:52.323 16:35:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:52.323 16:35:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:52.323 16:35:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:52.323 16:35:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:52.323 16:35:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:52.323 16:35:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:52.323 16:35:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:52.323 16:35:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:52.323 16:35:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:52.323 16:35:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:52.323 16:35:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:23:52.323 16:35:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:23:52.323 16:35:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:52.323 16:35:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:52.323 16:35:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:52.323 16:35:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:52.323 16:35:01 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:52.323 16:35:01 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:52.323 16:35:01 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:52.323 16:35:01 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:52.323 16:35:01 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.323 16:35:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.323 16:35:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.323 16:35:01 -- paths/export.sh@5 -- # export PATH 00:23:52.323 16:35:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.323 16:35:01 -- nvmf/common.sh@47 -- # : 0 00:23:52.323 16:35:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:52.323 16:35:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:52.323 16:35:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:52.323 16:35:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:52.323 16:35:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:52.323 16:35:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:52.323 16:35:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:52.323 16:35:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:52.323 16:35:01 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:23:52.323 16:35:01 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:23:52.323 16:35:01 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:23:52.323 16:35:01 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:23:52.323 16:35:01 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:23:52.323 16:35:01 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:52.323 16:35:01 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:52.323 16:35:01 -- nvmf/common.sh@399 -- # 
local -g is_hw=no 00:23:52.323 16:35:01 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:52.323 16:35:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:52.323 16:35:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:52.323 16:35:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:52.323 16:35:01 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:52.323 16:35:01 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:52.323 16:35:01 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:52.323 16:35:01 -- common/autotest_common.sh@10 -- # set +x 00:23:58.888 16:35:06 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:58.888 16:35:06 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:58.888 16:35:06 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:58.888 16:35:06 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:58.888 16:35:06 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:58.888 16:35:06 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:58.888 16:35:06 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:58.888 16:35:06 -- nvmf/common.sh@295 -- # net_devs=() 00:23:58.888 16:35:06 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:58.888 16:35:06 -- nvmf/common.sh@296 -- # e810=() 00:23:58.888 16:35:06 -- nvmf/common.sh@296 -- # local -ga e810 00:23:58.888 16:35:06 -- nvmf/common.sh@297 -- # x722=() 00:23:58.888 16:35:06 -- nvmf/common.sh@297 -- # local -ga x722 00:23:58.888 16:35:06 -- nvmf/common.sh@298 -- # mlx=() 00:23:58.888 16:35:06 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:58.888 16:35:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:58.888 16:35:06 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:58.888 16:35:06 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:58.888 16:35:06 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:58.888 16:35:06 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:58.888 16:35:06 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:58.889 16:35:06 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:58.889 16:35:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:58.889 16:35:06 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:58.889 16:35:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:58.889 16:35:06 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:58.889 16:35:06 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:58.889 16:35:06 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:23:58.889 16:35:06 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:23:58.889 16:35:06 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:23:58.889 16:35:06 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:23:58.889 16:35:06 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:23:58.889 16:35:06 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:58.889 16:35:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:58.889 16:35:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:23:58.889 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:23:58.889 16:35:06 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:58.889 16:35:06 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:58.889 16:35:06 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 
00:23:58.889 16:35:06 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:23:58.889 16:35:06 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:58.889 16:35:06 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:58.889 16:35:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:58.889 16:35:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:23:58.889 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:23:58.889 16:35:06 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:58.889 16:35:06 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:58.889 16:35:06 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:23:58.889 16:35:06 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:23:58.889 16:35:06 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:58.889 16:35:06 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:58.889 16:35:06 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:58.889 16:35:06 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:23:58.889 16:35:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:58.889 16:35:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:58.889 16:35:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:58.889 16:35:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:58.889 16:35:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:23:58.889 Found net devices under 0000:18:00.0: mlx_0_0 00:23:58.889 16:35:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:58.889 16:35:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:58.889 16:35:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:58.889 16:35:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:58.889 16:35:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:58.889 16:35:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:23:58.889 Found net devices under 0000:18:00.1: mlx_0_1 00:23:58.889 16:35:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:58.889 16:35:06 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:58.889 16:35:06 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:58.889 16:35:06 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:58.889 16:35:06 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:23:58.889 16:35:06 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:23:58.889 16:35:06 -- nvmf/common.sh@409 -- # rdma_device_init 00:23:58.889 16:35:06 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:23:58.889 16:35:06 -- nvmf/common.sh@58 -- # uname 00:23:58.889 16:35:06 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:23:58.889 16:35:06 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:23:58.889 16:35:06 -- nvmf/common.sh@63 -- # modprobe ib_core 00:23:58.889 16:35:06 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:23:58.889 16:35:06 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:23:58.889 16:35:06 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:23:58.889 16:35:06 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:23:58.889 16:35:06 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:23:58.889 16:35:06 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:23:58.889 16:35:06 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:58.889 16:35:06 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:23:58.889 16:35:06 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:58.889 16:35:06 -- 
nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:58.889 16:35:06 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:58.889 16:35:06 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:58.889 16:35:06 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:58.889 16:35:06 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:58.889 16:35:06 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:58.889 16:35:06 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:58.889 16:35:06 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:58.889 16:35:06 -- nvmf/common.sh@105 -- # continue 2 00:23:58.889 16:35:06 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:58.889 16:35:06 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:58.889 16:35:06 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:58.889 16:35:06 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:58.889 16:35:06 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:58.889 16:35:06 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:58.889 16:35:06 -- nvmf/common.sh@105 -- # continue 2 00:23:58.889 16:35:06 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:58.889 16:35:06 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:23:58.889 16:35:06 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:58.889 16:35:06 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:58.889 16:35:06 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:58.889 16:35:06 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:58.889 16:35:06 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:23:58.889 16:35:06 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:23:58.889 16:35:06 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:23:58.889 15: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:58.889 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:23:58.889 altname enp24s0f0np0 00:23:58.889 altname ens785f0np0 00:23:58.889 inet 192.168.100.8/24 scope global mlx_0_0 00:23:58.889 valid_lft forever preferred_lft forever 00:23:58.889 16:35:06 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:58.889 16:35:06 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:23:58.889 16:35:06 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:58.889 16:35:06 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:58.889 16:35:06 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:58.889 16:35:06 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:58.889 16:35:06 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:23:58.889 16:35:06 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:23:58.889 16:35:06 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:23:58.889 16: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:58.889 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:23:58.889 altname enp24s0f1np1 00:23:58.889 altname ens785f1np1 00:23:58.889 inet 192.168.100.9/24 scope global mlx_0_1 00:23:58.889 valid_lft forever preferred_lft forever 00:23:58.889 16:35:06 -- nvmf/common.sh@411 -- # return 0 00:23:58.889 16:35:06 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:58.889 16:35:06 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:58.889 16:35:06 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:23:58.889 16:35:06 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:23:58.889 16:35:06 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:23:58.889 16:35:06 -- 
nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:58.889 16:35:06 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:58.889 16:35:06 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:58.889 16:35:06 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:58.889 16:35:07 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:58.889 16:35:07 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:58.889 16:35:07 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:58.889 16:35:07 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:58.889 16:35:07 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:58.889 16:35:07 -- nvmf/common.sh@105 -- # continue 2 00:23:58.889 16:35:07 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:58.889 16:35:07 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:58.889 16:35:07 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:58.889 16:35:07 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:58.889 16:35:07 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:58.889 16:35:07 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:58.889 16:35:07 -- nvmf/common.sh@105 -- # continue 2 00:23:58.889 16:35:07 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:58.889 16:35:07 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:23:58.889 16:35:07 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:58.889 16:35:07 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:58.889 16:35:07 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:58.889 16:35:07 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:58.889 16:35:07 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:58.889 16:35:07 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:23:58.889 16:35:07 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:58.889 16:35:07 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:58.889 16:35:07 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:58.889 16:35:07 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:58.889 16:35:07 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:23:58.889 192.168.100.9' 00:23:58.889 16:35:07 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:23:58.889 192.168.100.9' 00:23:58.890 16:35:07 -- nvmf/common.sh@446 -- # head -n 1 00:23:58.890 16:35:07 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:58.890 16:35:07 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:23:58.890 192.168.100.9' 00:23:58.890 16:35:07 -- nvmf/common.sh@447 -- # head -n 1 00:23:58.890 16:35:07 -- nvmf/common.sh@447 -- # tail -n +2 00:23:58.890 16:35:07 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:58.890 16:35:07 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:23:58.890 16:35:07 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:58.890 16:35:07 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:23:58.890 16:35:07 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:23:58.890 16:35:07 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:23:58.890 16:35:07 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:23:58.890 16:35:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:58.890 16:35:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:58.890 16:35:07 -- common/autotest_common.sh@10 -- # set +x 00:23:58.890 
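
nvmf/common.sh has now mapped both mlx5 ports to their IPv4 addresses and derived NVMF_FIRST_TARGET_IP=192.168.100.8 and NVMF_SECOND_TARGET_IP=192.168.100.9. The same check done by hand, reusing the pipeline the harness traces above; the mlx_0_* names are specific to this testbed:

# Print the IPv4 address of each RDMA netdev the tests will use.
for ifc in mlx_0_0 mlx_0_1; do
    echo -n "$ifc: "
    ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
done
# expected on this node: 192.168.100.8 and 192.168.100.9
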
************************************ 00:23:58.890 START TEST nvmf_target_disconnect_tc1 00:23:58.890 ************************************ 00:23:58.890 16:35:07 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc1 00:23:58.890 16:35:07 -- host/target_disconnect.sh@32 -- # set +e 00:23:58.890 16:35:07 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:23:58.890 EAL: No free 2048 kB hugepages reported on node 1 00:23:58.890 [2024-04-26 16:35:07.344686] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:58.890 [2024-04-26 16:35:07.344794] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:58.890 [2024-04-26 16:35:07.344824] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d7080 00:23:59.456 [2024-04-26 16:35:08.348423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:59.456 [2024-04-26 16:35:08.348484] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:23:59.456 [2024-04-26 16:35:08.348527] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr is in error state 00:23:59.456 [2024-04-26 16:35:08.348561] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:59.456 [2024-04-26 16:35:08.348574] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:23:59.456 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:23:59.456 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:23:59.456 Initializing NVMe Controllers 00:23:59.456 16:35:08 -- host/target_disconnect.sh@33 -- # trap - ERR 00:23:59.456 16:35:08 -- host/target_disconnect.sh@33 -- # print_backtrace 00:23:59.456 16:35:08 -- common/autotest_common.sh@1139 -- # [[ hxBET =~ e ]] 00:23:59.457 16:35:08 -- common/autotest_common.sh@1139 -- # return 0 00:23:59.457 16:35:08 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:23:59.457 16:35:08 -- host/target_disconnect.sh@41 -- # set -e 00:23:59.457 00:23:59.457 real 0m1.129s 00:23:59.457 user 0m0.827s 00:23:59.457 sys 0m0.293s 00:23:59.457 16:35:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:59.457 16:35:08 -- common/autotest_common.sh@10 -- # set +x 00:23:59.457 ************************************ 00:23:59.457 END TEST nvmf_target_disconnect_tc1 00:23:59.457 ************************************ 00:23:59.457 16:35:08 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:23:59.457 16:35:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:59.457 16:35:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:59.457 16:35:08 -- common/autotest_common.sh@10 -- # set +x 00:23:59.714 ************************************ 00:23:59.714 START TEST nvmf_target_disconnect_tc2 00:23:59.714 ************************************ 00:23:59.714 16:35:08 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc2 00:23:59.714 16:35:08 -- host/target_disconnect.sh@45 -- # disconnect_init 192.168.100.8 00:23:59.714 16:35:08 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:23:59.714 
16:35:08 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:59.714 16:35:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:59.714 16:35:08 -- common/autotest_common.sh@10 -- # set +x 00:23:59.714 16:35:08 -- nvmf/common.sh@470 -- # nvmfpid=573323 00:23:59.714 16:35:08 -- nvmf/common.sh@471 -- # waitforlisten 573323 00:23:59.714 16:35:08 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:23:59.714 16:35:08 -- common/autotest_common.sh@817 -- # '[' -z 573323 ']' 00:23:59.714 16:35:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:59.714 16:35:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:59.714 16:35:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:59.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:59.714 16:35:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:59.714 16:35:08 -- common/autotest_common.sh@10 -- # set +x 00:23:59.714 [2024-04-26 16:35:08.633869] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:23:59.714 [2024-04-26 16:35:08.633924] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:59.714 EAL: No free 2048 kB hugepages reported on node 1 00:23:59.714 [2024-04-26 16:35:08.720219] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:59.971 [2024-04-26 16:35:08.806556] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:59.971 [2024-04-26 16:35:08.806604] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:59.971 [2024-04-26 16:35:08.806614] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:59.971 [2024-04-26 16:35:08.806622] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:59.971 [2024-04-26 16:35:08.806629] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:59.971 [2024-04-26 16:35:08.806756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:23:59.971 [2024-04-26 16:35:08.806868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:23:59.971 [2024-04-26 16:35:08.806970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:59.971 [2024-04-26 16:35:08.806971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:24:00.534 16:35:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:00.534 16:35:09 -- common/autotest_common.sh@850 -- # return 0 00:24:00.534 16:35:09 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:00.534 16:35:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:00.534 16:35:09 -- common/autotest_common.sh@10 -- # set +x 00:24:00.534 16:35:09 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:00.534 16:35:09 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:00.534 16:35:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:00.534 16:35:09 -- common/autotest_common.sh@10 -- # set +x 00:24:00.534 Malloc0 00:24:00.534 16:35:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:00.534 16:35:09 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:24:00.534 16:35:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:00.534 16:35:09 -- common/autotest_common.sh@10 -- # set +x 00:24:00.534 [2024-04-26 16:35:09.548417] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1d373f0/0x1d43000) succeed. 00:24:00.535 [2024-04-26 16:35:09.559190] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d38a30/0x1de3090) succeed. 
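
The tc2 target was started with -m 0xF0 and its reactors land on cores 4-7 (the Reactor started notices above), while the reconnect initiator that follows is pinned with -c 0xF, so the two applications do not share cores. A purely illustrative helper for expanding such a mask:

# Expand an SPDK core mask into the core ids it selects (illustration only).
mask_to_cores() {
    local mask=$(( $1 )) core=0 cores=""
    while (( mask > 0 )); do
        if (( mask & 1 )); then cores+="$core "; fi
        mask=$(( mask >> 1 ))
        core=$(( core + 1 ))
    done
    echo "$cores"
}
mask_to_cores 0xF0   # -> 4 5 6 7  (nvmf_tgt reactors, as logged)
mask_to_cores 0xF    # -> 0 1 2 3  (the reconnect initiator)
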
00:24:00.791 16:35:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:00.791 16:35:09 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:00.791 16:35:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:00.791 16:35:09 -- common/autotest_common.sh@10 -- # set +x 00:24:00.791 16:35:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:00.791 16:35:09 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:00.791 16:35:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:00.791 16:35:09 -- common/autotest_common.sh@10 -- # set +x 00:24:00.791 16:35:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:00.791 16:35:09 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:00.791 16:35:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:00.791 16:35:09 -- common/autotest_common.sh@10 -- # set +x 00:24:00.791 [2024-04-26 16:35:09.702035] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:00.791 16:35:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:00.791 16:35:09 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:24:00.791 16:35:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:00.791 16:35:09 -- common/autotest_common.sh@10 -- # set +x 00:24:00.791 16:35:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:00.791 16:35:09 -- host/target_disconnect.sh@50 -- # reconnectpid=573522 00:24:00.791 16:35:09 -- host/target_disconnect.sh@52 -- # sleep 2 00:24:00.791 16:35:09 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:24:00.791 EAL: No free 2048 kB hugepages reported on node 1 00:24:02.770 16:35:11 -- host/target_disconnect.sh@53 -- # kill -9 573323 00:24:02.770 16:35:11 -- host/target_disconnect.sh@55 -- # sleep 2 00:24:04.143 Write completed with error (sct=0, sc=8) 00:24:04.143 starting I/O failed 00:24:04.143 Write completed with error (sct=0, sc=8) 00:24:04.143 starting I/O failed 00:24:04.143 Write completed with error (sct=0, sc=8) 00:24:04.143 starting I/O failed 00:24:04.143 Write completed with error (sct=0, sc=8) 00:24:04.143 starting I/O failed 00:24:04.143 Write completed with error (sct=0, sc=8) 00:24:04.143 starting I/O failed 00:24:04.143 Write completed with error (sct=0, sc=8) 00:24:04.143 starting I/O failed 00:24:04.143 Write completed with error (sct=0, sc=8) 00:24:04.143 starting I/O failed 00:24:04.143 Write completed with error (sct=0, sc=8) 00:24:04.143 starting I/O failed 00:24:04.143 Read completed with error (sct=0, sc=8) 00:24:04.143 starting I/O failed 00:24:04.143 Read completed with error (sct=0, sc=8) 00:24:04.143 starting I/O failed 00:24:04.143 Write completed with error (sct=0, sc=8) 00:24:04.143 starting I/O failed 00:24:04.143 Read completed with error (sct=0, sc=8) 00:24:04.143 starting I/O failed 00:24:04.143 Write completed with error (sct=0, sc=8) 00:24:04.143 starting I/O failed 00:24:04.143 Read completed with error (sct=0, sc=8) 00:24:04.143 starting I/O failed 00:24:04.143 Write completed with error (sct=0, sc=8) 00:24:04.143 starting I/O failed 00:24:04.143 Read completed with 
error (sct=0, sc=8) 00:24:04.143 starting I/O failed 00:24:04.143 Write completed with error (sct=0, sc=8) 00:24:04.143 starting I/O failed 00:24:04.143 Write completed with error (sct=0, sc=8) 00:24:04.143 starting I/O failed 00:24:04.143 Write completed with error (sct=0, sc=8) 00:24:04.143 starting I/O failed 00:24:04.143 Write completed with error (sct=0, sc=8) 00:24:04.143 starting I/O failed 00:24:04.143 Read completed with error (sct=0, sc=8) 00:24:04.143 starting I/O failed 00:24:04.143 Write completed with error (sct=0, sc=8) 00:24:04.143 starting I/O failed 00:24:04.143 Read completed with error (sct=0, sc=8) 00:24:04.143 starting I/O failed 00:24:04.143 Write completed with error (sct=0, sc=8) 00:24:04.143 starting I/O failed 00:24:04.143 Read completed with error (sct=0, sc=8) 00:24:04.143 starting I/O failed 00:24:04.143 Write completed with error (sct=0, sc=8) 00:24:04.143 starting I/O failed 00:24:04.143 Read completed with error (sct=0, sc=8) 00:24:04.143 starting I/O failed 00:24:04.143 Read completed with error (sct=0, sc=8) 00:24:04.143 starting I/O failed 00:24:04.143 Write completed with error (sct=0, sc=8) 00:24:04.143 starting I/O failed 00:24:04.143 Write completed with error (sct=0, sc=8) 00:24:04.143 starting I/O failed 00:24:04.143 Read completed with error (sct=0, sc=8) 00:24:04.143 starting I/O failed 00:24:04.143 Read completed with error (sct=0, sc=8) 00:24:04.143 starting I/O failed 00:24:04.143 [2024-04-26 16:35:12.901476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:04.707 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 573323 Killed "${NVMF_APP[@]}" "$@" 00:24:04.707 16:35:13 -- host/target_disconnect.sh@56 -- # disconnect_init 192.168.100.8 00:24:04.707 16:35:13 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:24:04.707 16:35:13 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:04.707 16:35:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:04.707 16:35:13 -- common/autotest_common.sh@10 -- # set +x 00:24:04.964 16:35:13 -- nvmf/common.sh@470 -- # nvmfpid=574037 00:24:04.964 16:35:13 -- nvmf/common.sh@471 -- # waitforlisten 574037 00:24:04.964 16:35:13 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:24:04.964 16:35:13 -- common/autotest_common.sh@817 -- # '[' -z 574037 ']' 00:24:04.964 16:35:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:04.964 16:35:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:04.964 16:35:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:04.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:04.964 16:35:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:04.964 16:35:13 -- common/autotest_common.sh@10 -- # set +x 00:24:04.964 [2024-04-26 16:35:13.784340] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 
00:24:04.964 [2024-04-26 16:35:13.784408] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:04.964 EAL: No free 2048 kB hugepages reported on node 1 00:24:04.964 [2024-04-26 16:35:13.872641] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:04.964 Write completed with error (sct=0, sc=8) 00:24:04.964 starting I/O failed 00:24:04.964 Read completed with error (sct=0, sc=8) 00:24:04.964 starting I/O failed 00:24:04.964 Write completed with error (sct=0, sc=8) 00:24:04.964 starting I/O failed 00:24:04.964 Read completed with error (sct=0, sc=8) 00:24:04.964 starting I/O failed 00:24:04.964 Read completed with error (sct=0, sc=8) 00:24:04.964 starting I/O failed 00:24:04.964 Read completed with error (sct=0, sc=8) 00:24:04.964 starting I/O failed 00:24:04.964 Read completed with error (sct=0, sc=8) 00:24:04.964 starting I/O failed 00:24:04.964 Write completed with error (sct=0, sc=8) 00:24:04.964 starting I/O failed 00:24:04.964 Read completed with error (sct=0, sc=8) 00:24:04.964 starting I/O failed 00:24:04.964 Write completed with error (sct=0, sc=8) 00:24:04.964 starting I/O failed 00:24:04.964 Write completed with error (sct=0, sc=8) 00:24:04.964 starting I/O failed 00:24:04.964 Read completed with error (sct=0, sc=8) 00:24:04.964 starting I/O failed 00:24:04.964 Write completed with error (sct=0, sc=8) 00:24:04.964 starting I/O failed 00:24:04.964 Write completed with error (sct=0, sc=8) 00:24:04.964 starting I/O failed 00:24:04.964 Read completed with error (sct=0, sc=8) 00:24:04.964 starting I/O failed 00:24:04.964 Write completed with error (sct=0, sc=8) 00:24:04.964 starting I/O failed 00:24:04.964 Write completed with error (sct=0, sc=8) 00:24:04.964 starting I/O failed 00:24:04.964 Write completed with error (sct=0, sc=8) 00:24:04.964 starting I/O failed 00:24:04.964 Read completed with error (sct=0, sc=8) 00:24:04.964 starting I/O failed 00:24:04.964 Read completed with error (sct=0, sc=8) 00:24:04.964 starting I/O failed 00:24:04.964 Read completed with error (sct=0, sc=8) 00:24:04.964 starting I/O failed 00:24:04.964 Write completed with error (sct=0, sc=8) 00:24:04.964 starting I/O failed 00:24:04.964 Read completed with error (sct=0, sc=8) 00:24:04.964 starting I/O failed 00:24:04.964 Read completed with error (sct=0, sc=8) 00:24:04.964 starting I/O failed 00:24:04.964 Read completed with error (sct=0, sc=8) 00:24:04.964 starting I/O failed 00:24:04.964 Write completed with error (sct=0, sc=8) 00:24:04.964 starting I/O failed 00:24:04.964 Write completed with error (sct=0, sc=8) 00:24:04.964 starting I/O failed 00:24:04.964 Read completed with error (sct=0, sc=8) 00:24:04.964 starting I/O failed 00:24:04.964 Read completed with error (sct=0, sc=8) 00:24:04.964 starting I/O failed 00:24:04.964 Write completed with error (sct=0, sc=8) 00:24:04.964 starting I/O failed 00:24:04.964 Read completed with error (sct=0, sc=8) 00:24:04.964 starting I/O failed 00:24:04.964 Read completed with error (sct=0, sc=8) 00:24:04.964 starting I/O failed 00:24:04.964 [2024-04-26 16:35:13.905902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:04.964 [2024-04-26 16:35:13.949090] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:04.964 [2024-04-26 16:35:13.949137] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:04.964 [2024-04-26 16:35:13.949147] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:04.964 [2024-04-26 16:35:13.949156] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:04.964 [2024-04-26 16:35:13.949163] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:04.964 [2024-04-26 16:35:13.949294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:24:04.964 [2024-04-26 16:35:13.949395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:24:04.964 [2024-04-26 16:35:13.949493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:04.964 [2024-04-26 16:35:13.949495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:24:05.897 16:35:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:05.897 16:35:14 -- common/autotest_common.sh@850 -- # return 0 00:24:05.897 16:35:14 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:05.897 16:35:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:05.897 16:35:14 -- common/autotest_common.sh@10 -- # set +x 00:24:05.897 16:35:14 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:05.897 16:35:14 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:05.897 16:35:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:05.897 16:35:14 -- common/autotest_common.sh@10 -- # set +x 00:24:05.897 Malloc0 00:24:05.897 16:35:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:05.897 16:35:14 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:24:05.897 16:35:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:05.897 16:35:14 -- common/autotest_common.sh@10 -- # set +x 00:24:05.897 [2024-04-26 16:35:14.687465] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xeac3f0/0xeb8000) succeed. 00:24:05.898 [2024-04-26 16:35:14.699695] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xeada30/0xf58090) succeed. 
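The app_setup_trace notices above spell out how to inspect the tracepoints enabled by the -e 0xFFFF flag this nvmf_tgt was started with. Collected in one place below; the redirect target and copy destination are placeholders.

  # Live snapshot of the shared-memory trace ring for app instance 0:
  spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt
  # Or keep the raw ring for offline analysis/debug, as the notice suggests:
  cp /dev/shm/nvmf_trace.0 /tmp/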
00:24:05.898 16:35:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:05.898 16:35:14 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:05.898 16:35:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:05.898 16:35:14 -- common/autotest_common.sh@10 -- # set +x 00:24:05.898 16:35:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:05.898 16:35:14 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:05.898 16:35:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:05.898 16:35:14 -- common/autotest_common.sh@10 -- # set +x 00:24:05.898 16:35:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:05.898 16:35:14 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:05.898 16:35:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:05.898 16:35:14 -- common/autotest_common.sh@10 -- # set +x 00:24:05.898 [2024-04-26 16:35:14.842512] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:05.898 16:35:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:05.898 16:35:14 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:24:05.898 16:35:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:05.898 16:35:14 -- common/autotest_common.sh@10 -- # set +x 00:24:05.898 16:35:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:05.898 16:35:14 -- host/target_disconnect.sh@58 -- # wait 573522 00:24:05.898 Write completed with error (sct=0, sc=8) 00:24:05.898 starting I/O failed 00:24:05.898 Read completed with error (sct=0, sc=8) 00:24:05.898 starting I/O failed 00:24:05.898 Write completed with error (sct=0, sc=8) 00:24:05.898 starting I/O failed 00:24:05.898 Read completed with error (sct=0, sc=8) 00:24:05.898 starting I/O failed 00:24:05.898 Write completed with error (sct=0, sc=8) 00:24:05.898 starting I/O failed 00:24:05.898 Write completed with error (sct=0, sc=8) 00:24:05.898 starting I/O failed 00:24:05.898 Read completed with error (sct=0, sc=8) 00:24:05.898 starting I/O failed 00:24:05.898 Read completed with error (sct=0, sc=8) 00:24:05.898 starting I/O failed 00:24:05.898 Read completed with error (sct=0, sc=8) 00:24:05.898 starting I/O failed 00:24:05.898 Write completed with error (sct=0, sc=8) 00:24:05.898 starting I/O failed 00:24:05.898 Write completed with error (sct=0, sc=8) 00:24:05.898 starting I/O failed 00:24:05.898 Read completed with error (sct=0, sc=8) 00:24:05.898 starting I/O failed 00:24:05.898 Read completed with error (sct=0, sc=8) 00:24:05.898 starting I/O failed 00:24:05.898 Read completed with error (sct=0, sc=8) 00:24:05.898 starting I/O failed 00:24:05.898 Read completed with error (sct=0, sc=8) 00:24:05.898 starting I/O failed 00:24:05.898 Write completed with error (sct=0, sc=8) 00:24:05.898 starting I/O failed 00:24:05.898 Write completed with error (sct=0, sc=8) 00:24:05.898 starting I/O failed 00:24:05.898 Write completed with error (sct=0, sc=8) 00:24:05.898 starting I/O failed 00:24:05.898 Write completed with error (sct=0, sc=8) 00:24:05.898 starting I/O failed 00:24:05.898 Write completed with error (sct=0, sc=8) 00:24:05.898 starting I/O failed 00:24:05.898 Read completed with error (sct=0, sc=8) 00:24:05.898 starting I/O failed 00:24:05.898 Read completed with 
error (sct=0, sc=8) 00:24:05.898 starting I/O failed 00:24:05.898 Read completed with error (sct=0, sc=8) 00:24:05.898 starting I/O failed 00:24:05.898 Read completed with error (sct=0, sc=8) 00:24:05.898 starting I/O failed 00:24:05.898 Read completed with error (sct=0, sc=8) 00:24:05.898 starting I/O failed 00:24:05.898 Write completed with error (sct=0, sc=8) 00:24:05.898 starting I/O failed 00:24:05.898 Write completed with error (sct=0, sc=8) 00:24:05.898 starting I/O failed 00:24:05.898 Read completed with error (sct=0, sc=8) 00:24:05.898 starting I/O failed 00:24:05.898 Write completed with error (sct=0, sc=8) 00:24:05.898 starting I/O failed 00:24:05.898 Read completed with error (sct=0, sc=8) 00:24:05.898 starting I/O failed 00:24:05.898 Write completed with error (sct=0, sc=8) 00:24:05.898 starting I/O failed 00:24:05.898 Write completed with error (sct=0, sc=8) 00:24:05.898 starting I/O failed 00:24:05.898 [2024-04-26 16:35:14.910425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:05.898 [2024-04-26 16:35:14.916161] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:05.898 [2024-04-26 16:35:14.916214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:05.898 [2024-04-26 16:35:14.916235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:05.898 [2024-04-26 16:35:14.916245] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:05.898 [2024-04-26 16:35:14.916255] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.157 [2024-04-26 16:35:14.926232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.157 qpair failed and we were unable to recover it. 00:24:06.157 [2024-04-26 16:35:14.936169] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.157 [2024-04-26 16:35:14.936207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.157 [2024-04-26 16:35:14.936225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.157 [2024-04-26 16:35:14.936235] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.157 [2024-04-26 16:35:14.936244] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.157 [2024-04-26 16:35:14.946233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.157 qpair failed and we were unable to recover it. 
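For reference, the target bring-up replayed a few entries above reduces to six RPC calls: create a malloc bdev, create the RDMA transport, create subsystem cnode1, attach the namespace, and add the data and discovery listeners on 192.168.100.8:4420. rpc_cmd is the test suite's RPC wrapper (typically a thin shim over scripts/rpc.py, though that is not visible in this log); the calls collected here are copied from the trace above.

  rpc_cmd bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB malloc bdev, 512 B blocks
  rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024  # RDMA transport layer
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420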
00:24:06.157 [2024-04-26 16:35:14.956151] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.157 [2024-04-26 16:35:14.956195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.157 [2024-04-26 16:35:14.956212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.157 [2024-04-26 16:35:14.956223] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.157 [2024-04-26 16:35:14.956232] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.157 [2024-04-26 16:35:14.966264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.157 qpair failed and we were unable to recover it. 00:24:06.157 [2024-04-26 16:35:14.976217] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.157 [2024-04-26 16:35:14.976260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.157 [2024-04-26 16:35:14.976282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.157 [2024-04-26 16:35:14.976291] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.157 [2024-04-26 16:35:14.976300] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.157 [2024-04-26 16:35:14.986468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.157 qpair failed and we were unable to recover it. 00:24:06.157 [2024-04-26 16:35:14.996246] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.157 [2024-04-26 16:35:14.996292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.157 [2024-04-26 16:35:14.996309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.157 [2024-04-26 16:35:14.996319] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.157 [2024-04-26 16:35:14.996328] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.157 [2024-04-26 16:35:15.006542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.157 qpair failed and we were unable to recover it. 
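From here on the log repeats the same failure signature: the target rejects the I/O qpair with "Unknown controller ID 0x1", the host's Fabrics CONNECT poll reports sct 1 / sc 130, and the qpair is torn down after "CQ transport error -6". When skimming a long run like this, a quick tally of the signatures is handy; build.log below is a placeholder name for a saved copy of this console output.

  for pat in 'Unknown controller ID' \
             'Connect command completed with error' \
             'CQ transport error' \
             'qpair failed and we were unable to recover'; do
      printf '%-45s %s\n' "$pat:" "$(grep -c "$pat" build.log)"
  done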
00:24:06.157 [2024-04-26 16:35:15.016432] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.157 [2024-04-26 16:35:15.016470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.157 [2024-04-26 16:35:15.016488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.157 [2024-04-26 16:35:15.016498] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.157 [2024-04-26 16:35:15.016506] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.157 [2024-04-26 16:35:15.026621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.157 qpair failed and we were unable to recover it. 00:24:06.157 [2024-04-26 16:35:15.036298] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.157 [2024-04-26 16:35:15.036332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.157 [2024-04-26 16:35:15.036356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.157 [2024-04-26 16:35:15.036366] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.157 [2024-04-26 16:35:15.036375] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.157 [2024-04-26 16:35:15.046557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.157 qpair failed and we were unable to recover it. 00:24:06.157 [2024-04-26 16:35:15.056519] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.157 [2024-04-26 16:35:15.056561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.157 [2024-04-26 16:35:15.056578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.157 [2024-04-26 16:35:15.056588] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.157 [2024-04-26 16:35:15.056600] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.157 [2024-04-26 16:35:15.066744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.157 qpair failed and we were unable to recover it. 
00:24:06.157 [2024-04-26 16:35:15.076479] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.157 [2024-04-26 16:35:15.076521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.157 [2024-04-26 16:35:15.076538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.157 [2024-04-26 16:35:15.076548] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.157 [2024-04-26 16:35:15.076556] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.157 [2024-04-26 16:35:15.086778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.157 qpair failed and we were unable to recover it. 00:24:06.157 [2024-04-26 16:35:15.096508] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.157 [2024-04-26 16:35:15.096549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.157 [2024-04-26 16:35:15.096566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.157 [2024-04-26 16:35:15.096575] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.157 [2024-04-26 16:35:15.096584] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.157 [2024-04-26 16:35:15.106729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.157 qpair failed and we were unable to recover it. 00:24:06.157 [2024-04-26 16:35:15.116520] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.157 [2024-04-26 16:35:15.116560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.157 [2024-04-26 16:35:15.116577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.157 [2024-04-26 16:35:15.116586] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.157 [2024-04-26 16:35:15.116595] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.158 [2024-04-26 16:35:15.126830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.158 qpair failed and we were unable to recover it. 
00:24:06.158 [2024-04-26 16:35:15.136647] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.158 [2024-04-26 16:35:15.136687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.158 [2024-04-26 16:35:15.136703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.158 [2024-04-26 16:35:15.136713] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.158 [2024-04-26 16:35:15.136721] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.158 [2024-04-26 16:35:15.146937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.158 qpair failed and we were unable to recover it. 00:24:06.158 [2024-04-26 16:35:15.156668] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.158 [2024-04-26 16:35:15.156718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.158 [2024-04-26 16:35:15.156735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.158 [2024-04-26 16:35:15.156745] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.158 [2024-04-26 16:35:15.156754] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.158 [2024-04-26 16:35:15.166979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.158 qpair failed and we were unable to recover it. 00:24:06.158 [2024-04-26 16:35:15.176789] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.158 [2024-04-26 16:35:15.176837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.158 [2024-04-26 16:35:15.176854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.158 [2024-04-26 16:35:15.176864] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.158 [2024-04-26 16:35:15.176872] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.416 [2024-04-26 16:35:15.187133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.416 qpair failed and we were unable to recover it. 
00:24:06.416 [2024-04-26 16:35:15.196782] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.416 [2024-04-26 16:35:15.196816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.416 [2024-04-26 16:35:15.196832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.416 [2024-04-26 16:35:15.196842] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.416 [2024-04-26 16:35:15.196850] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.416 [2024-04-26 16:35:15.207123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.416 qpair failed and we were unable to recover it. 00:24:06.416 [2024-04-26 16:35:15.217072] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.416 [2024-04-26 16:35:15.217113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.416 [2024-04-26 16:35:15.217129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.416 [2024-04-26 16:35:15.217139] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.416 [2024-04-26 16:35:15.217148] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.416 [2024-04-26 16:35:15.227165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.416 qpair failed and we were unable to recover it. 00:24:06.416 [2024-04-26 16:35:15.236998] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.416 [2024-04-26 16:35:15.237034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.416 [2024-04-26 16:35:15.237051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.416 [2024-04-26 16:35:15.237063] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.416 [2024-04-26 16:35:15.237072] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.416 [2024-04-26 16:35:15.247281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.416 qpair failed and we were unable to recover it. 
00:24:06.416 [2024-04-26 16:35:15.257147] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.416 [2024-04-26 16:35:15.257183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.416 [2024-04-26 16:35:15.257200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.416 [2024-04-26 16:35:15.257209] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.416 [2024-04-26 16:35:15.257218] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.416 [2024-04-26 16:35:15.267101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.416 qpair failed and we were unable to recover it. 00:24:06.416 [2024-04-26 16:35:15.277118] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.417 [2024-04-26 16:35:15.277154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.417 [2024-04-26 16:35:15.277171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.417 [2024-04-26 16:35:15.277181] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.417 [2024-04-26 16:35:15.277190] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.417 [2024-04-26 16:35:15.287247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.417 qpair failed and we were unable to recover it. 00:24:06.417 [2024-04-26 16:35:15.297168] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.417 [2024-04-26 16:35:15.297206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.417 [2024-04-26 16:35:15.297223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.417 [2024-04-26 16:35:15.297232] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.417 [2024-04-26 16:35:15.297241] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.417 [2024-04-26 16:35:15.307268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.417 qpair failed and we were unable to recover it. 
00:24:06.417 [2024-04-26 16:35:15.317166] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.417 [2024-04-26 16:35:15.317209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.417 [2024-04-26 16:35:15.317226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.417 [2024-04-26 16:35:15.317235] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.417 [2024-04-26 16:35:15.317244] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.417 [2024-04-26 16:35:15.327363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.417 qpair failed and we were unable to recover it. 00:24:06.417 [2024-04-26 16:35:15.337168] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.417 [2024-04-26 16:35:15.337208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.417 [2024-04-26 16:35:15.337224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.417 [2024-04-26 16:35:15.337234] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.417 [2024-04-26 16:35:15.337242] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.417 [2024-04-26 16:35:15.347458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.417 qpair failed and we were unable to recover it. 00:24:06.417 [2024-04-26 16:35:15.357321] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.417 [2024-04-26 16:35:15.357364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.417 [2024-04-26 16:35:15.357381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.417 [2024-04-26 16:35:15.357390] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.417 [2024-04-26 16:35:15.357399] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.417 [2024-04-26 16:35:15.367442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.417 qpair failed and we were unable to recover it. 
00:24:06.417 [2024-04-26 16:35:15.377360] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.417 [2024-04-26 16:35:15.377398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.417 [2024-04-26 16:35:15.377414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.417 [2024-04-26 16:35:15.377424] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.417 [2024-04-26 16:35:15.377433] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.417 [2024-04-26 16:35:15.387701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.417 qpair failed and we were unable to recover it. 00:24:06.417 [2024-04-26 16:35:15.397378] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.417 [2024-04-26 16:35:15.397418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.417 [2024-04-26 16:35:15.397434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.417 [2024-04-26 16:35:15.397443] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.417 [2024-04-26 16:35:15.397452] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.417 [2024-04-26 16:35:15.407491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.417 qpair failed and we were unable to recover it. 00:24:06.417 [2024-04-26 16:35:15.417484] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.417 [2024-04-26 16:35:15.417526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.417 [2024-04-26 16:35:15.417545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.417 [2024-04-26 16:35:15.417555] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.417 [2024-04-26 16:35:15.417564] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.417 [2024-04-26 16:35:15.427880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.417 qpair failed and we were unable to recover it. 
00:24:06.417 [2024-04-26 16:35:15.437527] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.417 [2024-04-26 16:35:15.437583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.417 [2024-04-26 16:35:15.437601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.417 [2024-04-26 16:35:15.437612] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.417 [2024-04-26 16:35:15.437621] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.675 [2024-04-26 16:35:15.447833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.675 qpair failed and we were unable to recover it. 00:24:06.675 [2024-04-26 16:35:15.457623] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.675 [2024-04-26 16:35:15.457666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.675 [2024-04-26 16:35:15.457682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.675 [2024-04-26 16:35:15.457692] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.675 [2024-04-26 16:35:15.457701] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.676 [2024-04-26 16:35:15.467599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.676 qpair failed and we were unable to recover it. 00:24:06.676 [2024-04-26 16:35:15.477704] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.676 [2024-04-26 16:35:15.477744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.676 [2024-04-26 16:35:15.477761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.676 [2024-04-26 16:35:15.477771] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.676 [2024-04-26 16:35:15.477780] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.676 [2024-04-26 16:35:15.487680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.676 qpair failed and we were unable to recover it. 
00:24:06.676 [2024-04-26 16:35:15.497708] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.676 [2024-04-26 16:35:15.497743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.676 [2024-04-26 16:35:15.497759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.676 [2024-04-26 16:35:15.497769] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.676 [2024-04-26 16:35:15.497781] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.676 [2024-04-26 16:35:15.508110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.676 qpair failed and we were unable to recover it. 00:24:06.676 [2024-04-26 16:35:15.517857] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.676 [2024-04-26 16:35:15.517892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.676 [2024-04-26 16:35:15.517909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.676 [2024-04-26 16:35:15.517918] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.676 [2024-04-26 16:35:15.517927] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.676 [2024-04-26 16:35:15.527864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.676 qpair failed and we were unable to recover it. 00:24:06.676 [2024-04-26 16:35:15.537801] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.676 [2024-04-26 16:35:15.537841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.676 [2024-04-26 16:35:15.537858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.676 [2024-04-26 16:35:15.537868] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.676 [2024-04-26 16:35:15.537877] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.676 [2024-04-26 16:35:15.548228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.676 qpair failed and we were unable to recover it. 
00:24:06.676 [2024-04-26 16:35:15.557880] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.676 [2024-04-26 16:35:15.557922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.676 [2024-04-26 16:35:15.557939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.676 [2024-04-26 16:35:15.557948] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.676 [2024-04-26 16:35:15.557957] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.676 [2024-04-26 16:35:15.568109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.676 qpair failed and we were unable to recover it. 00:24:06.676 [2024-04-26 16:35:15.577969] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.676 [2024-04-26 16:35:15.578011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.676 [2024-04-26 16:35:15.578027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.676 [2024-04-26 16:35:15.578037] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.676 [2024-04-26 16:35:15.578046] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.676 [2024-04-26 16:35:15.588400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.676 qpair failed and we were unable to recover it. 00:24:06.676 [2024-04-26 16:35:15.597992] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.676 [2024-04-26 16:35:15.598027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.676 [2024-04-26 16:35:15.598044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.676 [2024-04-26 16:35:15.598053] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.676 [2024-04-26 16:35:15.598062] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.676 [2024-04-26 16:35:15.608144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.676 qpair failed and we were unable to recover it. 
00:24:06.676 [2024-04-26 16:35:15.618125] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.676 [2024-04-26 16:35:15.618168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.676 [2024-04-26 16:35:15.618184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.676 [2024-04-26 16:35:15.618194] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.676 [2024-04-26 16:35:15.618202] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.676 [2024-04-26 16:35:15.628362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.676 qpair failed and we were unable to recover it. 00:24:06.676 [2024-04-26 16:35:15.638227] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.676 [2024-04-26 16:35:15.638266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.676 [2024-04-26 16:35:15.638283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.676 [2024-04-26 16:35:15.638293] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.676 [2024-04-26 16:35:15.638302] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.676 [2024-04-26 16:35:15.648344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.676 qpair failed and we were unable to recover it. 00:24:06.676 [2024-04-26 16:35:15.658242] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.676 [2024-04-26 16:35:15.658279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.676 [2024-04-26 16:35:15.658296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.676 [2024-04-26 16:35:15.658306] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.676 [2024-04-26 16:35:15.658315] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.676 [2024-04-26 16:35:15.668421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.676 qpair failed and we were unable to recover it. 
00:24:06.676 [2024-04-26 16:35:15.678280] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.676 [2024-04-26 16:35:15.678321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.676 [2024-04-26 16:35:15.678338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.676 [2024-04-26 16:35:15.678356] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.676 [2024-04-26 16:35:15.678365] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.676 [2024-04-26 16:35:15.688501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.676 qpair failed and we were unable to recover it. 00:24:06.676 [2024-04-26 16:35:15.698265] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.676 [2024-04-26 16:35:15.698322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.676 [2024-04-26 16:35:15.698339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.676 [2024-04-26 16:35:15.698355] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.676 [2024-04-26 16:35:15.698365] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.934 [2024-04-26 16:35:15.708501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.934 qpair failed and we were unable to recover it. 00:24:06.934 [2024-04-26 16:35:15.718329] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.934 [2024-04-26 16:35:15.718373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.934 [2024-04-26 16:35:15.718390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.934 [2024-04-26 16:35:15.718400] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.934 [2024-04-26 16:35:15.718408] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.934 [2024-04-26 16:35:15.728619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.934 qpair failed and we were unable to recover it. 
00:24:06.934 [2024-04-26 16:35:15.738392] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.934 [2024-04-26 16:35:15.738433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.934 [2024-04-26 16:35:15.738449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.934 [2024-04-26 16:35:15.738459] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.934 [2024-04-26 16:35:15.738468] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.934 [2024-04-26 16:35:15.748695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.934 qpair failed and we were unable to recover it. 00:24:06.935 [2024-04-26 16:35:15.758447] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.935 [2024-04-26 16:35:15.758484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.935 [2024-04-26 16:35:15.758501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.935 [2024-04-26 16:35:15.758510] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.935 [2024-04-26 16:35:15.758519] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.935 [2024-04-26 16:35:15.768592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.935 qpair failed and we were unable to recover it. 00:24:06.935 [2024-04-26 16:35:15.778608] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.935 [2024-04-26 16:35:15.778654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.935 [2024-04-26 16:35:15.778671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.935 [2024-04-26 16:35:15.778680] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.935 [2024-04-26 16:35:15.778689] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.935 [2024-04-26 16:35:15.788620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.935 qpair failed and we were unable to recover it. 
00:24:06.935 [2024-04-26 16:35:15.798613] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.935 [2024-04-26 16:35:15.798656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.935 [2024-04-26 16:35:15.798673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.935 [2024-04-26 16:35:15.798683] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.935 [2024-04-26 16:35:15.798691] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.935 [2024-04-26 16:35:15.808816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.935 qpair failed and we were unable to recover it. 00:24:06.935 [2024-04-26 16:35:15.818701] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.935 [2024-04-26 16:35:15.818740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.935 [2024-04-26 16:35:15.818757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.935 [2024-04-26 16:35:15.818766] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.935 [2024-04-26 16:35:15.818775] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.935 [2024-04-26 16:35:15.828732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.935 qpair failed and we were unable to recover it. 00:24:06.935 [2024-04-26 16:35:15.838683] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.935 [2024-04-26 16:35:15.838717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.935 [2024-04-26 16:35:15.838734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.935 [2024-04-26 16:35:15.838743] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.935 [2024-04-26 16:35:15.838752] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.935 [2024-04-26 16:35:15.848983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.935 qpair failed and we were unable to recover it. 
00:24:06.935 [2024-04-26 16:35:15.858826] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.935 [2024-04-26 16:35:15.858864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.935 [2024-04-26 16:35:15.858884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.935 [2024-04-26 16:35:15.858893] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.935 [2024-04-26 16:35:15.858902] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.935 [2024-04-26 16:35:15.868858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.935 qpair failed and we were unable to recover it. 00:24:06.935 [2024-04-26 16:35:15.878868] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.935 [2024-04-26 16:35:15.878910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.935 [2024-04-26 16:35:15.878927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.935 [2024-04-26 16:35:15.878937] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.935 [2024-04-26 16:35:15.878945] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.935 [2024-04-26 16:35:15.889007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.935 qpair failed and we were unable to recover it. 00:24:06.935 [2024-04-26 16:35:15.898832] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.935 [2024-04-26 16:35:15.898868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.935 [2024-04-26 16:35:15.898885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.935 [2024-04-26 16:35:15.898895] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.935 [2024-04-26 16:35:15.898903] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.935 [2024-04-26 16:35:15.909155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.935 qpair failed and we were unable to recover it. 
00:24:06.935 [2024-04-26 16:35:15.919033] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.935 [2024-04-26 16:35:15.919068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.935 [2024-04-26 16:35:15.919084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.935 [2024-04-26 16:35:15.919094] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.935 [2024-04-26 16:35:15.919103] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.935 [2024-04-26 16:35:15.929066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.935 qpair failed and we were unable to recover it. 00:24:06.935 [2024-04-26 16:35:15.938997] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:06.935 [2024-04-26 16:35:15.939036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:06.935 [2024-04-26 16:35:15.939052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:06.935 [2024-04-26 16:35:15.939062] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:06.935 [2024-04-26 16:35:15.939074] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:06.935 [2024-04-26 16:35:15.949070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:06.935 qpair failed and we were unable to recover it. 00:24:07.193 [2024-04-26 16:35:15.959126] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.193 [2024-04-26 16:35:15.959178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.193 [2024-04-26 16:35:15.959196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.193 [2024-04-26 16:35:15.959207] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.193 [2024-04-26 16:35:15.959217] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.193 [2024-04-26 16:35:15.969203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.193 qpair failed and we were unable to recover it. 
00:24:07.193 [2024-04-26 16:35:15.979153] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.193 [2024-04-26 16:35:15.979190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.193 [2024-04-26 16:35:15.979207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.194 [2024-04-26 16:35:15.979216] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.194 [2024-04-26 16:35:15.979225] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.194 [2024-04-26 16:35:15.989362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.194 qpair failed and we were unable to recover it. 00:24:07.194 [2024-04-26 16:35:15.999233] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.194 [2024-04-26 16:35:15.999269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.194 [2024-04-26 16:35:15.999285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.194 [2024-04-26 16:35:15.999295] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.194 [2024-04-26 16:35:15.999304] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.194 [2024-04-26 16:35:16.009350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.194 qpair failed and we were unable to recover it. 00:24:07.194 [2024-04-26 16:35:16.019242] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.194 [2024-04-26 16:35:16.019284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.194 [2024-04-26 16:35:16.019300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.194 [2024-04-26 16:35:16.019310] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.194 [2024-04-26 16:35:16.019319] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.194 [2024-04-26 16:35:16.029399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.194 qpair failed and we were unable to recover it. 
00:24:07.194 [2024-04-26 16:35:16.039352] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.194 [2024-04-26 16:35:16.039402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.194 [2024-04-26 16:35:16.039419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.194 [2024-04-26 16:35:16.039428] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.194 [2024-04-26 16:35:16.039437] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.194 [2024-04-26 16:35:16.049486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.194 qpair failed and we were unable to recover it. 00:24:07.194 [2024-04-26 16:35:16.059405] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.194 [2024-04-26 16:35:16.059447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.194 [2024-04-26 16:35:16.059464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.194 [2024-04-26 16:35:16.059474] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.194 [2024-04-26 16:35:16.059482] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.194 [2024-04-26 16:35:16.069619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.194 qpair failed and we were unable to recover it. 00:24:07.194 [2024-04-26 16:35:16.079473] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.194 [2024-04-26 16:35:16.079507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.194 [2024-04-26 16:35:16.079524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.194 [2024-04-26 16:35:16.079533] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.194 [2024-04-26 16:35:16.079542] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.194 [2024-04-26 16:35:16.089477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.194 qpair failed and we were unable to recover it. 
00:24:07.194 [2024-04-26 16:35:16.099484] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.194 [2024-04-26 16:35:16.099522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.194 [2024-04-26 16:35:16.099538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.194 [2024-04-26 16:35:16.099548] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.194 [2024-04-26 16:35:16.099557] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.194 [2024-04-26 16:35:16.109712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.194 qpair failed and we were unable to recover it. 00:24:07.194 [2024-04-26 16:35:16.119436] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.194 [2024-04-26 16:35:16.119475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.194 [2024-04-26 16:35:16.119491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.194 [2024-04-26 16:35:16.119503] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.194 [2024-04-26 16:35:16.119512] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.194 [2024-04-26 16:35:16.129717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.194 qpair failed and we were unable to recover it. 00:24:07.194 [2024-04-26 16:35:16.139559] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.194 [2024-04-26 16:35:16.139594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.194 [2024-04-26 16:35:16.139611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.194 [2024-04-26 16:35:16.139620] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.194 [2024-04-26 16:35:16.139629] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.194 [2024-04-26 16:35:16.149862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.194 qpair failed and we were unable to recover it. 
00:24:07.194 [2024-04-26 16:35:16.159581] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.194 [2024-04-26 16:35:16.159617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.194 [2024-04-26 16:35:16.159634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.194 [2024-04-26 16:35:16.159644] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.194 [2024-04-26 16:35:16.159652] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.194 [2024-04-26 16:35:16.169867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.194 qpair failed and we were unable to recover it. 00:24:07.194 [2024-04-26 16:35:16.179684] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.194 [2024-04-26 16:35:16.179723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.194 [2024-04-26 16:35:16.179740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.194 [2024-04-26 16:35:16.179749] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.194 [2024-04-26 16:35:16.179758] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.194 [2024-04-26 16:35:16.189875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.194 qpair failed and we were unable to recover it. 00:24:07.194 [2024-04-26 16:35:16.199793] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.194 [2024-04-26 16:35:16.199832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.194 [2024-04-26 16:35:16.199848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.194 [2024-04-26 16:35:16.199857] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.194 [2024-04-26 16:35:16.199866] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.194 [2024-04-26 16:35:16.210024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.194 qpair failed and we were unable to recover it. 
00:24:07.453 [2024-04-26 16:35:16.219847] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.453 [2024-04-26 16:35:16.219886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.453 [2024-04-26 16:35:16.219903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.453 [2024-04-26 16:35:16.219913] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.453 [2024-04-26 16:35:16.219922] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.453 [2024-04-26 16:35:16.230064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.453 qpair failed and we were unable to recover it. 00:24:07.453 [2024-04-26 16:35:16.239811] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.453 [2024-04-26 16:35:16.239855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.453 [2024-04-26 16:35:16.239872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.453 [2024-04-26 16:35:16.239881] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.453 [2024-04-26 16:35:16.239890] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.453 [2024-04-26 16:35:16.249926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.453 qpair failed and we were unable to recover it. 00:24:07.453 [2024-04-26 16:35:16.259822] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.453 [2024-04-26 16:35:16.259863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.453 [2024-04-26 16:35:16.259880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.453 [2024-04-26 16:35:16.259890] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.453 [2024-04-26 16:35:16.259899] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.453 [2024-04-26 16:35:16.270063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.453 qpair failed and we were unable to recover it. 
00:24:07.453 [2024-04-26 16:35:16.280013] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.453 [2024-04-26 16:35:16.280054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.453 [2024-04-26 16:35:16.280071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.453 [2024-04-26 16:35:16.280080] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.453 [2024-04-26 16:35:16.280090] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.453 [2024-04-26 16:35:16.290227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.453 qpair failed and we were unable to recover it. 00:24:07.453 [2024-04-26 16:35:16.300084] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.453 [2024-04-26 16:35:16.300127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.453 [2024-04-26 16:35:16.300147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.453 [2024-04-26 16:35:16.300157] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.453 [2024-04-26 16:35:16.300166] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.453 [2024-04-26 16:35:16.310055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.453 qpair failed and we were unable to recover it. 00:24:07.453 [2024-04-26 16:35:16.320122] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.453 [2024-04-26 16:35:16.320163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.453 [2024-04-26 16:35:16.320179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.453 [2024-04-26 16:35:16.320189] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.453 [2024-04-26 16:35:16.320197] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.453 [2024-04-26 16:35:16.330417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.453 qpair failed and we were unable to recover it. 
00:24:07.453 [2024-04-26 16:35:16.340114] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.453 [2024-04-26 16:35:16.340153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.453 [2024-04-26 16:35:16.340170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.453 [2024-04-26 16:35:16.340179] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.453 [2024-04-26 16:35:16.340188] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.453 [2024-04-26 16:35:16.350305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.453 qpair failed and we were unable to recover it. 00:24:07.453 [2024-04-26 16:35:16.360237] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.453 [2024-04-26 16:35:16.360280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.453 [2024-04-26 16:35:16.360297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.453 [2024-04-26 16:35:16.360306] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.453 [2024-04-26 16:35:16.360315] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.453 [2024-04-26 16:35:16.370320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.453 qpair failed and we were unable to recover it. 00:24:07.453 [2024-04-26 16:35:16.380325] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.453 [2024-04-26 16:35:16.380366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.453 [2024-04-26 16:35:16.380383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.453 [2024-04-26 16:35:16.380392] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.453 [2024-04-26 16:35:16.380404] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.453 [2024-04-26 16:35:16.390374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.453 qpair failed and we were unable to recover it. 
00:24:07.453 [2024-04-26 16:35:16.400416] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.453 [2024-04-26 16:35:16.400452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.453 [2024-04-26 16:35:16.400469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.453 [2024-04-26 16:35:16.400478] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.454 [2024-04-26 16:35:16.400488] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.454 [2024-04-26 16:35:16.410511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.454 qpair failed and we were unable to recover it. 00:24:07.454 [2024-04-26 16:35:16.420361] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.454 [2024-04-26 16:35:16.420403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.454 [2024-04-26 16:35:16.420419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.454 [2024-04-26 16:35:16.420429] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.454 [2024-04-26 16:35:16.420437] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.454 [2024-04-26 16:35:16.430634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.454 qpair failed and we were unable to recover it. 00:24:07.454 [2024-04-26 16:35:16.440514] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.454 [2024-04-26 16:35:16.440560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.454 [2024-04-26 16:35:16.440576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.454 [2024-04-26 16:35:16.440586] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.454 [2024-04-26 16:35:16.440595] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.454 [2024-04-26 16:35:16.450802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.454 qpair failed and we were unable to recover it. 
00:24:07.454 [2024-04-26 16:35:16.460571] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.454 [2024-04-26 16:35:16.460607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.454 [2024-04-26 16:35:16.460623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.454 [2024-04-26 16:35:16.460633] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.454 [2024-04-26 16:35:16.460641] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.454 [2024-04-26 16:35:16.470602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.454 qpair failed and we were unable to recover it. 00:24:07.712 [2024-04-26 16:35:16.480667] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.712 [2024-04-26 16:35:16.480721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.712 [2024-04-26 16:35:16.480740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.712 [2024-04-26 16:35:16.480750] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.712 [2024-04-26 16:35:16.480759] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.712 [2024-04-26 16:35:16.490984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.712 qpair failed and we were unable to recover it. 00:24:07.712 [2024-04-26 16:35:16.500614] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.712 [2024-04-26 16:35:16.500653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.712 [2024-04-26 16:35:16.500670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.712 [2024-04-26 16:35:16.500679] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.712 [2024-04-26 16:35:16.500688] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.712 [2024-04-26 16:35:16.510974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.712 qpair failed and we were unable to recover it. 
00:24:07.712 [2024-04-26 16:35:16.520669] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.712 [2024-04-26 16:35:16.520714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.712 [2024-04-26 16:35:16.520731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.712 [2024-04-26 16:35:16.520740] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.712 [2024-04-26 16:35:16.520749] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.712 [2024-04-26 16:35:16.530957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.712 qpair failed and we were unable to recover it. 00:24:07.712 [2024-04-26 16:35:16.540803] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.712 [2024-04-26 16:35:16.540841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.712 [2024-04-26 16:35:16.540858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.712 [2024-04-26 16:35:16.540867] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.712 [2024-04-26 16:35:16.540876] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.712 [2024-04-26 16:35:16.551109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.712 qpair failed and we were unable to recover it. 00:24:07.712 [2024-04-26 16:35:16.560877] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.712 [2024-04-26 16:35:16.560911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.712 [2024-04-26 16:35:16.560928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.712 [2024-04-26 16:35:16.560941] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.712 [2024-04-26 16:35:16.560950] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.712 [2024-04-26 16:35:16.570997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.712 qpair failed and we were unable to recover it. 
00:24:07.712 [2024-04-26 16:35:16.580801] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.712 [2024-04-26 16:35:16.580841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.712 [2024-04-26 16:35:16.580858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.712 [2024-04-26 16:35:16.580867] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.712 [2024-04-26 16:35:16.580876] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.712 [2024-04-26 16:35:16.591145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.712 qpair failed and we were unable to recover it. 00:24:07.712 [2024-04-26 16:35:16.600925] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.712 [2024-04-26 16:35:16.600963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.713 [2024-04-26 16:35:16.600979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.713 [2024-04-26 16:35:16.600989] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.713 [2024-04-26 16:35:16.600998] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.713 [2024-04-26 16:35:16.611078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.713 qpair failed and we were unable to recover it. 00:24:07.713 [2024-04-26 16:35:16.621017] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.713 [2024-04-26 16:35:16.621056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.713 [2024-04-26 16:35:16.621073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.713 [2024-04-26 16:35:16.621082] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.713 [2024-04-26 16:35:16.621091] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.713 [2024-04-26 16:35:16.631308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.713 qpair failed and we were unable to recover it. 
00:24:07.713 [2024-04-26 16:35:16.640980] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.713 [2024-04-26 16:35:16.641015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.713 [2024-04-26 16:35:16.641032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.713 [2024-04-26 16:35:16.641041] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.713 [2024-04-26 16:35:16.641050] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.713 [2024-04-26 16:35:16.651309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.713 qpair failed and we were unable to recover it. 00:24:07.713 [2024-04-26 16:35:16.661128] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.713 [2024-04-26 16:35:16.661169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.713 [2024-04-26 16:35:16.661185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.713 [2024-04-26 16:35:16.661195] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.713 [2024-04-26 16:35:16.661204] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.713 [2024-04-26 16:35:16.671367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.713 qpair failed and we were unable to recover it. 00:24:07.713 [2024-04-26 16:35:16.681230] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.713 [2024-04-26 16:35:16.681277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.713 [2024-04-26 16:35:16.681294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.713 [2024-04-26 16:35:16.681304] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.713 [2024-04-26 16:35:16.681313] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.713 [2024-04-26 16:35:16.691314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.713 qpair failed and we were unable to recover it. 
00:24:07.713 [2024-04-26 16:35:16.701335] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.713 [2024-04-26 16:35:16.701375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.713 [2024-04-26 16:35:16.701392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.713 [2024-04-26 16:35:16.701402] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.713 [2024-04-26 16:35:16.701411] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.713 [2024-04-26 16:35:16.711390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.713 qpair failed and we were unable to recover it. 00:24:07.713 [2024-04-26 16:35:16.721395] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.713 [2024-04-26 16:35:16.721435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.713 [2024-04-26 16:35:16.721451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.713 [2024-04-26 16:35:16.721461] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.713 [2024-04-26 16:35:16.721470] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.713 [2024-04-26 16:35:16.731416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.713 qpair failed and we were unable to recover it. 00:24:07.971 [2024-04-26 16:35:16.741435] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.971 [2024-04-26 16:35:16.741477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.971 [2024-04-26 16:35:16.741497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.971 [2024-04-26 16:35:16.741507] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.971 [2024-04-26 16:35:16.741516] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.971 [2024-04-26 16:35:16.751541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.971 qpair failed and we were unable to recover it. 
00:24:07.971 [2024-04-26 16:35:16.761388] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.971 [2024-04-26 16:35:16.761425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.971 [2024-04-26 16:35:16.761442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.971 [2024-04-26 16:35:16.761451] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.971 [2024-04-26 16:35:16.761460] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.971 [2024-04-26 16:35:16.771579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.971 qpair failed and we were unable to recover it. 00:24:07.971 [2024-04-26 16:35:16.781549] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.971 [2024-04-26 16:35:16.781589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.971 [2024-04-26 16:35:16.781606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.971 [2024-04-26 16:35:16.781616] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.971 [2024-04-26 16:35:16.781625] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.971 [2024-04-26 16:35:16.791713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.971 qpair failed and we were unable to recover it. 00:24:07.971 [2024-04-26 16:35:16.801565] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.971 [2024-04-26 16:35:16.801601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.971 [2024-04-26 16:35:16.801618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.971 [2024-04-26 16:35:16.801627] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.971 [2024-04-26 16:35:16.801636] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.971 [2024-04-26 16:35:16.811638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.971 qpair failed and we were unable to recover it. 
00:24:07.971 [2024-04-26 16:35:16.821531] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.971 [2024-04-26 16:35:16.821570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.971 [2024-04-26 16:35:16.821587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.971 [2024-04-26 16:35:16.821597] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.971 [2024-04-26 16:35:16.821609] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.971 [2024-04-26 16:35:16.831857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.971 qpair failed and we were unable to recover it. 00:24:07.971 [2024-04-26 16:35:16.841680] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.971 [2024-04-26 16:35:16.841723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.971 [2024-04-26 16:35:16.841740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.971 [2024-04-26 16:35:16.841749] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.971 [2024-04-26 16:35:16.841758] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.972 [2024-04-26 16:35:16.851872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.972 qpair failed and we were unable to recover it. 00:24:07.972 [2024-04-26 16:35:16.861728] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.972 [2024-04-26 16:35:16.861766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.972 [2024-04-26 16:35:16.861783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.972 [2024-04-26 16:35:16.861792] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.972 [2024-04-26 16:35:16.861801] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.972 [2024-04-26 16:35:16.872014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.972 qpair failed and we were unable to recover it. 
00:24:07.972 [2024-04-26 16:35:16.881770] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.972 [2024-04-26 16:35:16.881807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.972 [2024-04-26 16:35:16.881824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.972 [2024-04-26 16:35:16.881833] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.972 [2024-04-26 16:35:16.881842] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.972 [2024-04-26 16:35:16.892043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.972 qpair failed and we were unable to recover it. 00:24:07.972 [2024-04-26 16:35:16.902036] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.972 [2024-04-26 16:35:16.902071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.972 [2024-04-26 16:35:16.902087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.972 [2024-04-26 16:35:16.902097] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.972 [2024-04-26 16:35:16.902106] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.972 [2024-04-26 16:35:16.911836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.972 qpair failed and we were unable to recover it. 00:24:07.972 [2024-04-26 16:35:16.921926] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.972 [2024-04-26 16:35:16.921967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.972 [2024-04-26 16:35:16.921984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.972 [2024-04-26 16:35:16.921993] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.972 [2024-04-26 16:35:16.922002] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.972 [2024-04-26 16:35:16.932053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.972 qpair failed and we were unable to recover it. 
00:24:07.972 [2024-04-26 16:35:16.941948] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.972 [2024-04-26 16:35:16.941986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.972 [2024-04-26 16:35:16.942002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.972 [2024-04-26 16:35:16.942011] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.972 [2024-04-26 16:35:16.942020] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.972 [2024-04-26 16:35:16.952250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.972 qpair failed and we were unable to recover it. 00:24:07.972 [2024-04-26 16:35:16.961890] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.972 [2024-04-26 16:35:16.961927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.972 [2024-04-26 16:35:16.961944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.972 [2024-04-26 16:35:16.961954] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.972 [2024-04-26 16:35:16.961963] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.972 [2024-04-26 16:35:16.972144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.972 qpair failed and we were unable to recover it. 00:24:07.972 [2024-04-26 16:35:16.982054] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:07.972 [2024-04-26 16:35:16.982092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:07.972 [2024-04-26 16:35:16.982109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:07.972 [2024-04-26 16:35:16.982118] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:07.972 [2024-04-26 16:35:16.982127] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:07.972 [2024-04-26 16:35:16.992261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:07.972 qpair failed and we were unable to recover it. 
00:24:08.230 [2024-04-26 16:35:17.002152] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:08.230 [2024-04-26 16:35:17.002195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:08.230 [2024-04-26 16:35:17.002212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:08.230 [2024-04-26 16:35:17.002225] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:08.230 [2024-04-26 16:35:17.002234] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:08.230 [2024-04-26 16:35:17.012214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:08.230 qpair failed and we were unable to recover it. 00:24:08.230 [2024-04-26 16:35:17.022264] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:08.230 [2024-04-26 16:35:17.022304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:08.230 [2024-04-26 16:35:17.022320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:08.231 [2024-04-26 16:35:17.022330] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:08.231 [2024-04-26 16:35:17.022339] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:08.231 [2024-04-26 16:35:17.032201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:08.231 qpair failed and we were unable to recover it. 00:24:08.231 [2024-04-26 16:35:17.042265] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:08.231 [2024-04-26 16:35:17.042298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:08.231 [2024-04-26 16:35:17.042316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:08.231 [2024-04-26 16:35:17.042326] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:08.231 [2024-04-26 16:35:17.042334] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:08.231 [2024-04-26 16:35:17.052401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:08.231 qpair failed and we were unable to recover it. 
00:24:08.231 [2024-04-26 16:35:17.062311] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:08.231 [2024-04-26 16:35:17.062361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:08.231 [2024-04-26 16:35:17.062379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:08.231 [2024-04-26 16:35:17.062389] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:08.231 [2024-04-26 16:35:17.062398] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:08.231 [2024-04-26 16:35:17.072427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:08.231 qpair failed and we were unable to recover it. 00:24:08.231 [2024-04-26 16:35:17.082408] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:08.231 [2024-04-26 16:35:17.082454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:08.231 [2024-04-26 16:35:17.082471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:08.231 [2024-04-26 16:35:17.082480] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:08.231 [2024-04-26 16:35:17.082489] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:08.231 [2024-04-26 16:35:17.092600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:08.231 qpair failed and we were unable to recover it. 00:24:08.231 [2024-04-26 16:35:17.102441] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:08.231 [2024-04-26 16:35:17.102478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:08.231 [2024-04-26 16:35:17.102495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:08.231 [2024-04-26 16:35:17.102504] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:08.231 [2024-04-26 16:35:17.102513] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:08.231 [2024-04-26 16:35:17.112739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:08.231 qpair failed and we were unable to recover it. 
00:24:08.231 [2024-04-26 16:35:17.122411] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:08.231 [2024-04-26 16:35:17.122451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:08.231 [2024-04-26 16:35:17.122467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:08.231 [2024-04-26 16:35:17.122477] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:08.231 [2024-04-26 16:35:17.122486] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:08.231 [2024-04-26 16:35:17.132610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:08.231 qpair failed and we were unable to recover it. 00:24:08.231 [2024-04-26 16:35:17.142641] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:08.231 [2024-04-26 16:35:17.142682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:08.231 [2024-04-26 16:35:17.142699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:08.231 [2024-04-26 16:35:17.142708] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:08.231 [2024-04-26 16:35:17.142717] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:08.231 [2024-04-26 16:35:17.152718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:08.231 qpair failed and we were unable to recover it. 00:24:08.231 [2024-04-26 16:35:17.162579] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:08.231 [2024-04-26 16:35:17.162620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:08.231 [2024-04-26 16:35:17.162637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:08.231 [2024-04-26 16:35:17.162646] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:08.231 [2024-04-26 16:35:17.162655] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:08.231 [2024-04-26 16:35:17.172963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:08.231 qpair failed and we were unable to recover it. 
00:24:08.231 [2024-04-26 16:35:17.182616] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:08.231 [2024-04-26 16:35:17.182655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:08.231 [2024-04-26 16:35:17.182676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:08.231 [2024-04-26 16:35:17.182685] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:08.231 [2024-04-26 16:35:17.182694] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:08.231 [2024-04-26 16:35:17.192928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:08.231 qpair failed and we were unable to recover it. 00:24:08.231 [2024-04-26 16:35:17.202764] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:08.231 [2024-04-26 16:35:17.202801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:08.231 [2024-04-26 16:35:17.202818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:08.231 [2024-04-26 16:35:17.202828] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:08.231 [2024-04-26 16:35:17.202837] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:08.231 [2024-04-26 16:35:17.212878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:08.231 qpair failed and we were unable to recover it. 00:24:08.231 [2024-04-26 16:35:17.222794] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:08.231 [2024-04-26 16:35:17.222835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:08.231 [2024-04-26 16:35:17.222851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:08.231 [2024-04-26 16:35:17.222861] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:08.231 [2024-04-26 16:35:17.222870] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:08.231 [2024-04-26 16:35:17.232951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:08.231 qpair failed and we were unable to recover it. 
00:24:08.231 [2024-04-26 16:35:17.242919] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:08.231 [2024-04-26 16:35:17.242958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:08.231 [2024-04-26 16:35:17.242975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:08.231 [2024-04-26 16:35:17.242985] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:08.231 [2024-04-26 16:35:17.242993] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:08.231 [2024-04-26 16:35:17.253042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:08.231 qpair failed and we were unable to recover it. 00:24:08.490 [2024-04-26 16:35:17.262858] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:08.490 [2024-04-26 16:35:17.262894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:08.490 [2024-04-26 16:35:17.262911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:08.490 [2024-04-26 16:35:17.262920] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:08.490 [2024-04-26 16:35:17.262932] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:08.490 [2024-04-26 16:35:17.273187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:08.490 qpair failed and we were unable to recover it. 00:24:08.490 [2024-04-26 16:35:17.282930] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:08.490 [2024-04-26 16:35:17.282969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:08.490 [2024-04-26 16:35:17.282985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:08.490 [2024-04-26 16:35:17.282995] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:08.490 [2024-04-26 16:35:17.283003] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:08.490 [2024-04-26 16:35:17.293111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:08.490 qpair failed and we were unable to recover it. 
00:24:08.490 [2024-04-26 16:35:17.303031] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:08.490 [2024-04-26 16:35:17.303069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:08.490 [2024-04-26 16:35:17.303086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:08.490 [2024-04-26 16:35:17.303095] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:08.490 [2024-04-26 16:35:17.303104] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:08.490 [2024-04-26 16:35:17.313022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:08.490 qpair failed and we were unable to recover it. 00:24:08.490 [2024-04-26 16:35:17.323036] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:08.490 [2024-04-26 16:35:17.323077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:08.490 [2024-04-26 16:35:17.323093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:08.490 [2024-04-26 16:35:17.323103] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:08.490 [2024-04-26 16:35:17.323111] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:08.490 [2024-04-26 16:35:17.333116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:08.490 qpair failed and we were unable to recover it. 00:24:08.490 [2024-04-26 16:35:17.343163] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:08.490 [2024-04-26 16:35:17.343208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:08.490 [2024-04-26 16:35:17.343225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:08.490 [2024-04-26 16:35:17.343235] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:08.490 [2024-04-26 16:35:17.343244] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:08.490 [2024-04-26 16:35:17.353375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:08.490 qpair failed and we were unable to recover it. 
00:24:08.490 [2024-04-26 16:35:17.363209] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:08.490 [2024-04-26 16:35:17.363250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:08.490 [2024-04-26 16:35:17.363267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:08.490 [2024-04-26 16:35:17.363276] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:08.490 [2024-04-26 16:35:17.363285] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:08.490 [2024-04-26 16:35:17.373570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:08.490 qpair failed and we were unable to recover it. 00:24:08.490 [2024-04-26 16:35:17.383271] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:08.490 [2024-04-26 16:35:17.383310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:08.490 [2024-04-26 16:35:17.383326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:08.490 [2024-04-26 16:35:17.383336] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:08.490 [2024-04-26 16:35:17.383349] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:08.490 [2024-04-26 16:35:17.393325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:08.490 qpair failed and we were unable to recover it. 00:24:08.490 [2024-04-26 16:35:17.403266] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:08.490 [2024-04-26 16:35:17.403304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:08.490 [2024-04-26 16:35:17.403320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:08.490 [2024-04-26 16:35:17.403329] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:08.490 [2024-04-26 16:35:17.403338] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:08.490 [2024-04-26 16:35:17.413425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:08.490 qpair failed and we were unable to recover it. 
00:24:08.490 [2024-04-26 16:35:17.423372] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:08.490 [2024-04-26 16:35:17.423410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:08.490 [2024-04-26 16:35:17.423426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:08.490 [2024-04-26 16:35:17.423436] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:08.490 [2024-04-26 16:35:17.423445] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:08.490 [2024-04-26 16:35:17.433554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:08.490 qpair failed and we were unable to recover it. 00:24:08.490 [2024-04-26 16:35:17.443512] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:08.490 [2024-04-26 16:35:17.443546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:08.490 [2024-04-26 16:35:17.443563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:08.490 [2024-04-26 16:35:17.443577] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:08.490 [2024-04-26 16:35:17.443586] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:08.490 [2024-04-26 16:35:17.453640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:08.490 qpair failed and we were unable to recover it. 00:24:08.490 [2024-04-26 16:35:17.463442] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:08.490 [2024-04-26 16:35:17.463481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:08.490 [2024-04-26 16:35:17.463497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:08.490 [2024-04-26 16:35:17.463507] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:08.490 [2024-04-26 16:35:17.463516] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:08.490 [2024-04-26 16:35:17.473688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:08.490 qpair failed and we were unable to recover it. 
00:24:08.490 [2024-04-26 16:35:17.483488] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:08.490 [2024-04-26 16:35:17.483527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:08.490 [2024-04-26 16:35:17.483544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:08.490 [2024-04-26 16:35:17.483554] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:08.490 [2024-04-26 16:35:17.483562] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:08.490 [2024-04-26 16:35:17.493720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:08.490 qpair failed and we were unable to recover it. 00:24:08.490 [2024-04-26 16:35:17.503569] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:08.490 [2024-04-26 16:35:17.503606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:08.490 [2024-04-26 16:35:17.503622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:08.490 [2024-04-26 16:35:17.503632] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:08.490 [2024-04-26 16:35:17.503641] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:08.748 [2024-04-26 16:35:17.513707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:08.748 qpair failed and we were unable to recover it. 00:24:08.748 [2024-04-26 16:35:17.523739] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:08.748 [2024-04-26 16:35:17.523777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:08.748 [2024-04-26 16:35:17.523793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:08.748 [2024-04-26 16:35:17.523802] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:08.748 [2024-04-26 16:35:17.523811] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:08.748 [2024-04-26 16:35:17.533802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:08.748 qpair failed and we were unable to recover it. 
00:24:08.748 [2024-04-26 16:35:17.543785] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:08.748 [2024-04-26 16:35:17.543824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:08.748 [2024-04-26 16:35:17.543841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:08.748 [2024-04-26 16:35:17.543850] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:08.748 [2024-04-26 16:35:17.543859] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:08.749 [2024-04-26 16:35:17.554065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:08.749 qpair failed and we were unable to recover it. 00:24:08.749 [2024-04-26 16:35:17.563830] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:08.749 [2024-04-26 16:35:17.563874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:08.749 [2024-04-26 16:35:17.563890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:08.749 [2024-04-26 16:35:17.563899] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:08.749 [2024-04-26 16:35:17.563908] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:08.749 [2024-04-26 16:35:17.573932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:08.749 qpair failed and we were unable to recover it. 00:24:08.749 [2024-04-26 16:35:17.583864] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:08.749 [2024-04-26 16:35:17.583899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:08.749 [2024-04-26 16:35:17.583916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:08.749 [2024-04-26 16:35:17.583925] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:08.749 [2024-04-26 16:35:17.583934] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:08.749 [2024-04-26 16:35:17.594112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:08.749 qpair failed and we were unable to recover it. 
00:24:08.749 [2024-04-26 16:35:17.603921] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:08.749 [2024-04-26 16:35:17.603958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:08.749 [2024-04-26 16:35:17.603974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:08.749 [2024-04-26 16:35:17.603984] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:08.749 [2024-04-26 16:35:17.603993] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:08.749 [2024-04-26 16:35:17.614043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:08.749 qpair failed and we were unable to recover it. 00:24:08.749 [2024-04-26 16:35:17.623859] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:08.749 [2024-04-26 16:35:17.623899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:08.749 [2024-04-26 16:35:17.623918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:08.749 [2024-04-26 16:35:17.623927] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:08.749 [2024-04-26 16:35:17.623936] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:08.749 [2024-04-26 16:35:17.634006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:08.749 qpair failed and we were unable to recover it. 00:24:08.749 [2024-04-26 16:35:17.644054] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:08.749 [2024-04-26 16:35:17.644099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:08.749 [2024-04-26 16:35:17.644116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:08.749 [2024-04-26 16:35:17.644126] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:08.749 [2024-04-26 16:35:17.644135] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:08.749 [2024-04-26 16:35:17.654093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:08.749 qpair failed and we were unable to recover it. 
00:24:08.749 [2024-04-26 16:35:17.664010] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:08.749 [2024-04-26 16:35:17.664048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:08.749 [2024-04-26 16:35:17.664064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:08.749 [2024-04-26 16:35:17.664074] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:08.749 [2024-04-26 16:35:17.664083] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:08.749 [2024-04-26 16:35:17.674169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:08.749 qpair failed and we were unable to recover it. 00:24:08.749 [2024-04-26 16:35:17.684173] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:08.749 [2024-04-26 16:35:17.684209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:08.749 [2024-04-26 16:35:17.684226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:08.749 [2024-04-26 16:35:17.684235] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:08.749 [2024-04-26 16:35:17.684245] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:08.749 [2024-04-26 16:35:17.694386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:08.749 qpair failed and we were unable to recover it. 00:24:08.749 [2024-04-26 16:35:17.704090] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:08.749 [2024-04-26 16:35:17.704128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:08.749 [2024-04-26 16:35:17.704144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:08.749 [2024-04-26 16:35:17.704155] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:08.749 [2024-04-26 16:35:17.704167] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:08.749 [2024-04-26 16:35:17.714204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:08.749 qpair failed and we were unable to recover it. 
00:24:08.749 [2024-04-26 16:35:17.724142] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:08.749 [2024-04-26 16:35:17.724182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:08.749 [2024-04-26 16:35:17.724198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:08.749 [2024-04-26 16:35:17.724208] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:08.749 [2024-04-26 16:35:17.724216] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:08.749 [2024-04-26 16:35:17.734392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:08.749 qpair failed and we were unable to recover it. 00:24:08.749 [2024-04-26 16:35:17.744256] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:08.749 [2024-04-26 16:35:17.744298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:08.749 [2024-04-26 16:35:17.744315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:08.749 [2024-04-26 16:35:17.744324] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:08.749 [2024-04-26 16:35:17.744333] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:08.749 [2024-04-26 16:35:17.754418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:08.749 qpair failed and we were unable to recover it. 00:24:08.749 [2024-04-26 16:35:17.764321] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:08.749 [2024-04-26 16:35:17.764362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:08.749 [2024-04-26 16:35:17.764379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:08.749 [2024-04-26 16:35:17.764389] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:08.749 [2024-04-26 16:35:17.764398] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.008 [2024-04-26 16:35:17.774523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.008 qpair failed and we were unable to recover it. 
00:24:09.008 [2024-04-26 16:35:17.784397] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.008 [2024-04-26 16:35:17.784438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.008 [2024-04-26 16:35:17.784454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.008 [2024-04-26 16:35:17.784464] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.008 [2024-04-26 16:35:17.784473] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.008 [2024-04-26 16:35:17.794552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.008 qpair failed and we were unable to recover it. 00:24:09.008 [2024-04-26 16:35:17.804410] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.008 [2024-04-26 16:35:17.804459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.008 [2024-04-26 16:35:17.804475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.008 [2024-04-26 16:35:17.804485] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.008 [2024-04-26 16:35:17.804494] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.008 [2024-04-26 16:35:17.814603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.008 qpair failed and we were unable to recover it. 00:24:09.008 [2024-04-26 16:35:17.824543] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.008 [2024-04-26 16:35:17.824584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.008 [2024-04-26 16:35:17.824600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.008 [2024-04-26 16:35:17.824610] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.008 [2024-04-26 16:35:17.824619] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.008 [2024-04-26 16:35:17.834668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.008 qpair failed and we were unable to recover it. 
00:24:09.008 [2024-04-26 16:35:17.844559] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.008 [2024-04-26 16:35:17.844592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.008 [2024-04-26 16:35:17.844609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.008 [2024-04-26 16:35:17.844618] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.008 [2024-04-26 16:35:17.844627] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.008 [2024-04-26 16:35:17.854727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.008 qpair failed and we were unable to recover it. 00:24:09.008 [2024-04-26 16:35:17.864615] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.008 [2024-04-26 16:35:17.864657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.008 [2024-04-26 16:35:17.864674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.008 [2024-04-26 16:35:17.864683] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.008 [2024-04-26 16:35:17.864692] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.008 [2024-04-26 16:35:17.874613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.008 qpair failed and we were unable to recover it. 00:24:09.008 [2024-04-26 16:35:17.884640] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.008 [2024-04-26 16:35:17.884679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.008 [2024-04-26 16:35:17.884695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.008 [2024-04-26 16:35:17.884708] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.008 [2024-04-26 16:35:17.884717] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.008 [2024-04-26 16:35:17.894900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.008 qpair failed and we were unable to recover it. 
00:24:09.008 [2024-04-26 16:35:17.904643] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.008 [2024-04-26 16:35:17.904683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.008 [2024-04-26 16:35:17.904699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.008 [2024-04-26 16:35:17.904709] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.008 [2024-04-26 16:35:17.904718] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.008 [2024-04-26 16:35:17.914755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.008 qpair failed and we were unable to recover it. 00:24:09.008 [2024-04-26 16:35:17.924668] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.008 [2024-04-26 16:35:17.924705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.008 [2024-04-26 16:35:17.924721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.008 [2024-04-26 16:35:17.924731] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.008 [2024-04-26 16:35:17.924740] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.008 [2024-04-26 16:35:17.934970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.008 qpair failed and we were unable to recover it. 00:24:09.008 [2024-04-26 16:35:17.944888] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.008 [2024-04-26 16:35:17.944930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.008 [2024-04-26 16:35:17.944946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.008 [2024-04-26 16:35:17.944956] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.008 [2024-04-26 16:35:17.944965] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.008 [2024-04-26 16:35:17.955119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.008 qpair failed and we were unable to recover it. 
00:24:09.008 [2024-04-26 16:35:17.964794] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.008 [2024-04-26 16:35:17.964835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.008 [2024-04-26 16:35:17.964852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.008 [2024-04-26 16:35:17.964861] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.008 [2024-04-26 16:35:17.964870] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.008 [2024-04-26 16:35:17.975116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.008 qpair failed and we were unable to recover it. 00:24:09.008 [2024-04-26 16:35:17.984985] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.008 [2024-04-26 16:35:17.985021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.008 [2024-04-26 16:35:17.985037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.008 [2024-04-26 16:35:17.985047] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.009 [2024-04-26 16:35:17.985056] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.009 [2024-04-26 16:35:17.994948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.009 qpair failed and we were unable to recover it. 00:24:09.009 [2024-04-26 16:35:18.004982] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.009 [2024-04-26 16:35:18.005017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.009 [2024-04-26 16:35:18.005034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.009 [2024-04-26 16:35:18.005044] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.009 [2024-04-26 16:35:18.005052] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.009 [2024-04-26 16:35:18.015167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.009 qpair failed and we were unable to recover it. 
00:24:09.009 [2024-04-26 16:35:18.025103] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.009 [2024-04-26 16:35:18.025142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.009 [2024-04-26 16:35:18.025158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.009 [2024-04-26 16:35:18.025168] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.009 [2024-04-26 16:35:18.025176] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.267 [2024-04-26 16:35:18.035293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.267 qpair failed and we were unable to recover it. 00:24:09.267 [2024-04-26 16:35:18.045089] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.267 [2024-04-26 16:35:18.045129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.267 [2024-04-26 16:35:18.045145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.267 [2024-04-26 16:35:18.045155] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.267 [2024-04-26 16:35:18.045164] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.268 [2024-04-26 16:35:18.055409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.268 qpair failed and we were unable to recover it. 00:24:09.268 [2024-04-26 16:35:18.065230] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.268 [2024-04-26 16:35:18.065270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.268 [2024-04-26 16:35:18.065290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.268 [2024-04-26 16:35:18.065299] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.268 [2024-04-26 16:35:18.065308] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.268 [2024-04-26 16:35:18.075325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.268 qpair failed and we were unable to recover it. 
00:24:09.268 [2024-04-26 16:35:18.085147] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.268 [2024-04-26 16:35:18.085184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.268 [2024-04-26 16:35:18.085200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.268 [2024-04-26 16:35:18.085210] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.268 [2024-04-26 16:35:18.085219] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.268 [2024-04-26 16:35:18.095481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.268 qpair failed and we were unable to recover it. 00:24:09.268 [2024-04-26 16:35:18.105301] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.268 [2024-04-26 16:35:18.105340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.268 [2024-04-26 16:35:18.105361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.268 [2024-04-26 16:35:18.105371] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.268 [2024-04-26 16:35:18.105380] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.268 [2024-04-26 16:35:18.115543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.268 qpair failed and we were unable to recover it. 00:24:09.268 [2024-04-26 16:35:18.125238] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.268 [2024-04-26 16:35:18.125276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.268 [2024-04-26 16:35:18.125292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.268 [2024-04-26 16:35:18.125301] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.268 [2024-04-26 16:35:18.125310] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.268 [2024-04-26 16:35:18.135422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.268 qpair failed and we were unable to recover it. 
00:24:09.268 [2024-04-26 16:35:18.145307] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.268 [2024-04-26 16:35:18.145350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.268 [2024-04-26 16:35:18.145366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.268 [2024-04-26 16:35:18.145376] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.268 [2024-04-26 16:35:18.145388] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.268 [2024-04-26 16:35:18.155739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.268 qpair failed and we were unable to recover it. 00:24:09.268 [2024-04-26 16:35:18.165505] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.268 [2024-04-26 16:35:18.165545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.268 [2024-04-26 16:35:18.165562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.268 [2024-04-26 16:35:18.165571] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.268 [2024-04-26 16:35:18.165580] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.268 [2024-04-26 16:35:18.175412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.268 qpair failed and we were unable to recover it. 00:24:09.268 [2024-04-26 16:35:18.185518] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.268 [2024-04-26 16:35:18.185556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.268 [2024-04-26 16:35:18.185574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.268 [2024-04-26 16:35:18.185583] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.268 [2024-04-26 16:35:18.185592] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.268 [2024-04-26 16:35:18.195865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.268 qpair failed and we were unable to recover it. 
00:24:09.268 [2024-04-26 16:35:18.205664] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.268 [2024-04-26 16:35:18.205701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.268 [2024-04-26 16:35:18.205717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.268 [2024-04-26 16:35:18.205727] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.268 [2024-04-26 16:35:18.205736] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.268 [2024-04-26 16:35:18.215934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.268 qpair failed and we were unable to recover it. 00:24:09.268 [2024-04-26 16:35:18.225637] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.268 [2024-04-26 16:35:18.225673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.268 [2024-04-26 16:35:18.225689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.268 [2024-04-26 16:35:18.225699] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.268 [2024-04-26 16:35:18.225708] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.268 [2024-04-26 16:35:18.235757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.268 qpair failed and we were unable to recover it. 00:24:09.268 [2024-04-26 16:35:18.245719] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.268 [2024-04-26 16:35:18.245761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.268 [2024-04-26 16:35:18.245777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.268 [2024-04-26 16:35:18.245787] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.268 [2024-04-26 16:35:18.245796] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.268 [2024-04-26 16:35:18.255889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.268 qpair failed and we were unable to recover it. 
00:24:09.268 [2024-04-26 16:35:18.265777] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.268 [2024-04-26 16:35:18.265817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.268 [2024-04-26 16:35:18.265834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.268 [2024-04-26 16:35:18.265843] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.268 [2024-04-26 16:35:18.265852] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.268 [2024-04-26 16:35:18.275951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.268 qpair failed and we were unable to recover it. 00:24:09.268 [2024-04-26 16:35:18.285846] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.268 [2024-04-26 16:35:18.285884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.268 [2024-04-26 16:35:18.285901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.268 [2024-04-26 16:35:18.285911] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.268 [2024-04-26 16:35:18.285920] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.527 [2024-04-26 16:35:18.296151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.527 qpair failed and we were unable to recover it. 00:24:09.527 [2024-04-26 16:35:18.305947] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.527 [2024-04-26 16:35:18.305986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.527 [2024-04-26 16:35:18.306002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.527 [2024-04-26 16:35:18.306012] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.527 [2024-04-26 16:35:18.306020] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.527 [2024-04-26 16:35:18.316218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.527 qpair failed and we were unable to recover it. 
00:24:09.527 [2024-04-26 16:35:18.326030] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.527 [2024-04-26 16:35:18.326067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.527 [2024-04-26 16:35:18.326083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.527 [2024-04-26 16:35:18.326096] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.527 [2024-04-26 16:35:18.326105] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.527 [2024-04-26 16:35:18.336217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.527 qpair failed and we were unable to recover it. 00:24:09.527 [2024-04-26 16:35:18.346106] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.527 [2024-04-26 16:35:18.346145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.527 [2024-04-26 16:35:18.346161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.527 [2024-04-26 16:35:18.346171] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.527 [2024-04-26 16:35:18.346180] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.527 [2024-04-26 16:35:18.356320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.527 qpair failed and we were unable to recover it. 00:24:09.527 [2024-04-26 16:35:18.366074] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.527 [2024-04-26 16:35:18.366114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.527 [2024-04-26 16:35:18.366130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.527 [2024-04-26 16:35:18.366139] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.527 [2024-04-26 16:35:18.366148] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.527 [2024-04-26 16:35:18.376245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.527 qpair failed and we were unable to recover it. 
00:24:09.527 [2024-04-26 16:35:18.386219] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.527 [2024-04-26 16:35:18.386259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.527 [2024-04-26 16:35:18.386276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.527 [2024-04-26 16:35:18.386285] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.527 [2024-04-26 16:35:18.386294] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.527 [2024-04-26 16:35:18.396186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.527 qpair failed and we were unable to recover it. 00:24:09.527 [2024-04-26 16:35:18.406120] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.527 [2024-04-26 16:35:18.406156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.527 [2024-04-26 16:35:18.406172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.527 [2024-04-26 16:35:18.406181] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.527 [2024-04-26 16:35:18.406190] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.527 [2024-04-26 16:35:18.416351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.527 qpair failed and we were unable to recover it. 00:24:09.527 [2024-04-26 16:35:18.426253] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.527 [2024-04-26 16:35:18.426292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.527 [2024-04-26 16:35:18.426308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.527 [2024-04-26 16:35:18.426318] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.527 [2024-04-26 16:35:18.426327] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.527 [2024-04-26 16:35:18.436437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.527 qpair failed and we were unable to recover it. 
00:24:09.527 [2024-04-26 16:35:18.446351] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.527 [2024-04-26 16:35:18.446394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.527 [2024-04-26 16:35:18.446411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.527 [2024-04-26 16:35:18.446420] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.527 [2024-04-26 16:35:18.446429] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.527 [2024-04-26 16:35:18.456701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.527 qpair failed and we were unable to recover it. 00:24:09.527 [2024-04-26 16:35:18.466369] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.527 [2024-04-26 16:35:18.466411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.527 [2024-04-26 16:35:18.466428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.527 [2024-04-26 16:35:18.466437] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.527 [2024-04-26 16:35:18.466446] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.527 [2024-04-26 16:35:18.476496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.527 qpair failed and we were unable to recover it. 00:24:09.527 [2024-04-26 16:35:18.486467] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.527 [2024-04-26 16:35:18.486509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.527 [2024-04-26 16:35:18.486526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.528 [2024-04-26 16:35:18.486536] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.528 [2024-04-26 16:35:18.486544] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.528 [2024-04-26 16:35:18.496547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.528 qpair failed and we were unable to recover it. 
00:24:09.528 [2024-04-26 16:35:18.506620] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.528 [2024-04-26 16:35:18.506662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.528 [2024-04-26 16:35:18.506682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.528 [2024-04-26 16:35:18.506691] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.528 [2024-04-26 16:35:18.506700] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.528 [2024-04-26 16:35:18.516558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.528 qpair failed and we were unable to recover it. 00:24:09.528 [2024-04-26 16:35:18.526575] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.528 [2024-04-26 16:35:18.526613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.528 [2024-04-26 16:35:18.526630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.528 [2024-04-26 16:35:18.526640] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.528 [2024-04-26 16:35:18.526648] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.528 [2024-04-26 16:35:18.536765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.528 qpair failed and we were unable to recover it. 00:24:09.528 [2024-04-26 16:35:18.546652] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.528 [2024-04-26 16:35:18.546685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.528 [2024-04-26 16:35:18.546702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.528 [2024-04-26 16:35:18.546711] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.528 [2024-04-26 16:35:18.546720] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.786 [2024-04-26 16:35:18.556759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.786 qpair failed and we were unable to recover it. 
00:24:09.786 [2024-04-26 16:35:18.566656] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.786 [2024-04-26 16:35:18.566693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.786 [2024-04-26 16:35:18.566709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.786 [2024-04-26 16:35:18.566718] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.786 [2024-04-26 16:35:18.566727] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.786 [2024-04-26 16:35:18.576933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.786 qpair failed and we were unable to recover it. 00:24:09.786 [2024-04-26 16:35:18.586757] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.786 [2024-04-26 16:35:18.586796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.786 [2024-04-26 16:35:18.586812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.786 [2024-04-26 16:35:18.586822] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.786 [2024-04-26 16:35:18.586833] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.786 [2024-04-26 16:35:18.596945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.786 qpair failed and we were unable to recover it. 00:24:09.786 [2024-04-26 16:35:18.606787] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.786 [2024-04-26 16:35:18.606832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.786 [2024-04-26 16:35:18.606849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.786 [2024-04-26 16:35:18.606858] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.786 [2024-04-26 16:35:18.606867] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.787 [2024-04-26 16:35:18.616897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.787 qpair failed and we were unable to recover it. 
00:24:09.787 [2024-04-26 16:35:18.626791] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.787 [2024-04-26 16:35:18.626826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.787 [2024-04-26 16:35:18.626842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.787 [2024-04-26 16:35:18.626851] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.787 [2024-04-26 16:35:18.626860] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.787 [2024-04-26 16:35:18.637040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.787 qpair failed and we were unable to recover it. 00:24:09.787 [2024-04-26 16:35:18.647072] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.787 [2024-04-26 16:35:18.647112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.787 [2024-04-26 16:35:18.647129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.787 [2024-04-26 16:35:18.647138] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.787 [2024-04-26 16:35:18.647147] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.787 [2024-04-26 16:35:18.657130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.787 qpair failed and we were unable to recover it. 00:24:09.787 [2024-04-26 16:35:18.666968] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.787 [2024-04-26 16:35:18.667007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.787 [2024-04-26 16:35:18.667025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.787 [2024-04-26 16:35:18.667034] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.787 [2024-04-26 16:35:18.667043] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.787 [2024-04-26 16:35:18.677230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.787 qpair failed and we were unable to recover it. 
00:24:09.787 [2024-04-26 16:35:18.686994] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.787 [2024-04-26 16:35:18.687040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.787 [2024-04-26 16:35:18.687057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.787 [2024-04-26 16:35:18.687066] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.787 [2024-04-26 16:35:18.687075] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.787 [2024-04-26 16:35:18.697213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.787 qpair failed and we were unable to recover it. 00:24:09.787 [2024-04-26 16:35:18.707069] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.787 [2024-04-26 16:35:18.707102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.787 [2024-04-26 16:35:18.707119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.787 [2024-04-26 16:35:18.707129] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.787 [2024-04-26 16:35:18.707138] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.787 [2024-04-26 16:35:18.717430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.787 qpair failed and we were unable to recover it. 00:24:09.787 [2024-04-26 16:35:18.727082] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.787 [2024-04-26 16:35:18.727116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.787 [2024-04-26 16:35:18.727133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.787 [2024-04-26 16:35:18.727143] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.787 [2024-04-26 16:35:18.727151] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.787 [2024-04-26 16:35:18.737348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.787 qpair failed and we were unable to recover it. 
00:24:09.787 [2024-04-26 16:35:18.747206] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.787 [2024-04-26 16:35:18.747244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.787 [2024-04-26 16:35:18.747260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.787 [2024-04-26 16:35:18.747270] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.787 [2024-04-26 16:35:18.747279] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.787 [2024-04-26 16:35:18.757342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.787 qpair failed and we were unable to recover it. 00:24:09.787 [2024-04-26 16:35:18.767379] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.787 [2024-04-26 16:35:18.767416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.787 [2024-04-26 16:35:18.767433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.787 [2024-04-26 16:35:18.767445] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.787 [2024-04-26 16:35:18.767454] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.787 [2024-04-26 16:35:18.777471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.787 qpair failed and we were unable to recover it. 00:24:09.787 [2024-04-26 16:35:18.787337] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.787 [2024-04-26 16:35:18.787377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.787 [2024-04-26 16:35:18.787394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.787 [2024-04-26 16:35:18.787403] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.787 [2024-04-26 16:35:18.787412] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:09.787 [2024-04-26 16:35:18.797638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:09.787 qpair failed and we were unable to recover it. 
00:24:09.787 [2024-04-26 16:35:18.807216] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:09.787 [2024-04-26 16:35:18.807252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:09.787 [2024-04-26 16:35:18.807269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:09.787 [2024-04-26 16:35:18.807278] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:09.787 [2024-04-26 16:35:18.807288] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.046 [2024-04-26 16:35:18.817598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.046 qpair failed and we were unable to recover it. 00:24:10.046 [2024-04-26 16:35:18.827489] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.046 [2024-04-26 16:35:18.827531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.046 [2024-04-26 16:35:18.827548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.046 [2024-04-26 16:35:18.827557] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.046 [2024-04-26 16:35:18.827566] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.046 [2024-04-26 16:35:18.837574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.046 qpair failed and we were unable to recover it. 00:24:10.046 [2024-04-26 16:35:18.847512] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.046 [2024-04-26 16:35:18.847553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.046 [2024-04-26 16:35:18.847570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.046 [2024-04-26 16:35:18.847580] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.046 [2024-04-26 16:35:18.847589] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.046 [2024-04-26 16:35:18.857769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.046 qpair failed and we were unable to recover it. 
00:24:10.046 [2024-04-26 16:35:18.867548] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.046 [2024-04-26 16:35:18.867584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.046 [2024-04-26 16:35:18.867601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.046 [2024-04-26 16:35:18.867610] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.046 [2024-04-26 16:35:18.867620] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.046 [2024-04-26 16:35:18.877795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.046 qpair failed and we were unable to recover it. 00:24:10.046 [2024-04-26 16:35:18.887522] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.046 [2024-04-26 16:35:18.887562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.046 [2024-04-26 16:35:18.887578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.046 [2024-04-26 16:35:18.887587] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.046 [2024-04-26 16:35:18.887596] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.046 [2024-04-26 16:35:18.897733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.046 qpair failed and we were unable to recover it. 00:24:10.046 [2024-04-26 16:35:18.907634] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.046 [2024-04-26 16:35:18.907674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.046 [2024-04-26 16:35:18.907690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.046 [2024-04-26 16:35:18.907700] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.046 [2024-04-26 16:35:18.907709] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.046 [2024-04-26 16:35:18.917844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.046 qpair failed and we were unable to recover it. 
00:24:10.047 [2024-04-26 16:35:18.927692] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.047 [2024-04-26 16:35:18.927732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.047 [2024-04-26 16:35:18.927748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.047 [2024-04-26 16:35:18.927758] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.047 [2024-04-26 16:35:18.927767] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.047 [2024-04-26 16:35:18.937947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.047 qpair failed and we were unable to recover it. 00:24:10.047 [2024-04-26 16:35:18.947700] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.047 [2024-04-26 16:35:18.947737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.047 [2024-04-26 16:35:18.947757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.047 [2024-04-26 16:35:18.947767] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.047 [2024-04-26 16:35:18.947776] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.047 [2024-04-26 16:35:18.957949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.047 qpair failed and we were unable to recover it. 00:24:10.047 [2024-04-26 16:35:18.967700] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.047 [2024-04-26 16:35:18.967736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.047 [2024-04-26 16:35:18.967752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.047 [2024-04-26 16:35:18.967762] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.047 [2024-04-26 16:35:18.967770] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.047 [2024-04-26 16:35:18.977761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.047 qpair failed and we were unable to recover it. 
00:24:10.047 [2024-04-26 16:35:18.987894] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.047 [2024-04-26 16:35:18.987934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.047 [2024-04-26 16:35:18.987952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.047 [2024-04-26 16:35:18.987961] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.047 [2024-04-26 16:35:18.987970] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.047 [2024-04-26 16:35:18.998012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.047 qpair failed and we were unable to recover it. 00:24:10.047 [2024-04-26 16:35:19.008004] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.047 [2024-04-26 16:35:19.008045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.047 [2024-04-26 16:35:19.008062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.047 [2024-04-26 16:35:19.008072] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.047 [2024-04-26 16:35:19.008081] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.047 [2024-04-26 16:35:19.018134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.047 qpair failed and we were unable to recover it. 00:24:10.047 [2024-04-26 16:35:19.028026] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.047 [2024-04-26 16:35:19.028068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.047 [2024-04-26 16:35:19.028084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.047 [2024-04-26 16:35:19.028094] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.047 [2024-04-26 16:35:19.028109] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.047 [2024-04-26 16:35:19.038373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.047 qpair failed and we were unable to recover it. 
00:24:10.047 [2024-04-26 16:35:19.047963] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.047 [2024-04-26 16:35:19.048006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.047 [2024-04-26 16:35:19.048022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.047 [2024-04-26 16:35:19.048032] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.047 [2024-04-26 16:35:19.048040] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.047 [2024-04-26 16:35:19.058214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.047 qpair failed and we were unable to recover it. 00:24:10.047 [2024-04-26 16:35:19.068206] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.047 [2024-04-26 16:35:19.068243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.047 [2024-04-26 16:35:19.068260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.047 [2024-04-26 16:35:19.068269] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.047 [2024-04-26 16:35:19.068278] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.305 [2024-04-26 16:35:19.078211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.305 qpair failed and we were unable to recover it. 00:24:10.305 [2024-04-26 16:35:19.088170] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.305 [2024-04-26 16:35:19.088211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.305 [2024-04-26 16:35:19.088228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.305 [2024-04-26 16:35:19.088237] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.305 [2024-04-26 16:35:19.088248] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.305 [2024-04-26 16:35:19.098230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.305 qpair failed and we were unable to recover it. 
00:24:10.305 [2024-04-26 16:35:19.108134] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.305 [2024-04-26 16:35:19.108179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.305 [2024-04-26 16:35:19.108195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.305 [2024-04-26 16:35:19.108205] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.305 [2024-04-26 16:35:19.108214] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.305 [2024-04-26 16:35:19.118300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.305 qpair failed and we were unable to recover it. 00:24:10.305 [2024-04-26 16:35:19.128169] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.306 [2024-04-26 16:35:19.128213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.306 [2024-04-26 16:35:19.128230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.306 [2024-04-26 16:35:19.128240] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.306 [2024-04-26 16:35:19.128249] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.306 [2024-04-26 16:35:19.138364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.306 qpair failed and we were unable to recover it. 00:24:10.306 [2024-04-26 16:35:19.148431] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.306 [2024-04-26 16:35:19.148471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.306 [2024-04-26 16:35:19.148488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.306 [2024-04-26 16:35:19.148498] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.306 [2024-04-26 16:35:19.148507] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.306 [2024-04-26 16:35:19.158614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.306 qpair failed and we were unable to recover it. 
00:24:10.306 [2024-04-26 16:35:19.168438] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.306 [2024-04-26 16:35:19.168485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.306 [2024-04-26 16:35:19.168502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.306 [2024-04-26 16:35:19.168511] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.306 [2024-04-26 16:35:19.168521] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.306 [2024-04-26 16:35:19.178520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.306 qpair failed and we were unable to recover it. 00:24:10.306 [2024-04-26 16:35:19.188519] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.306 [2024-04-26 16:35:19.188559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.306 [2024-04-26 16:35:19.188576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.306 [2024-04-26 16:35:19.188585] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.306 [2024-04-26 16:35:19.188594] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.306 [2024-04-26 16:35:19.198775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.306 qpair failed and we were unable to recover it. 00:24:10.306 [2024-04-26 16:35:19.208511] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.306 [2024-04-26 16:35:19.208549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.306 [2024-04-26 16:35:19.208566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.306 [2024-04-26 16:35:19.208579] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.306 [2024-04-26 16:35:19.208588] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.306 [2024-04-26 16:35:19.218634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.306 qpair failed and we were unable to recover it. 
00:24:10.306 [2024-04-26 16:35:19.228588] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.306 [2024-04-26 16:35:19.228630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.306 [2024-04-26 16:35:19.228648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.306 [2024-04-26 16:35:19.228657] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.306 [2024-04-26 16:35:19.228666] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.306 [2024-04-26 16:35:19.238627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.306 qpair failed and we were unable to recover it. 00:24:10.306 [2024-04-26 16:35:19.248609] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.306 [2024-04-26 16:35:19.248649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.306 [2024-04-26 16:35:19.248666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.306 [2024-04-26 16:35:19.248676] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.306 [2024-04-26 16:35:19.248685] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.306 [2024-04-26 16:35:19.258861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.306 qpair failed and we were unable to recover it. 00:24:10.306 [2024-04-26 16:35:19.268744] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.306 [2024-04-26 16:35:19.268784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.306 [2024-04-26 16:35:19.268801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.306 [2024-04-26 16:35:19.268810] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.306 [2024-04-26 16:35:19.268819] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.306 [2024-04-26 16:35:19.278905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.306 qpair failed and we were unable to recover it. 
00:24:10.306 [2024-04-26 16:35:19.288732] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.306 [2024-04-26 16:35:19.288770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.306 [2024-04-26 16:35:19.288787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.306 [2024-04-26 16:35:19.288797] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.306 [2024-04-26 16:35:19.288805] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.306 [2024-04-26 16:35:19.298715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.306 qpair failed and we were unable to recover it. 00:24:10.306 [2024-04-26 16:35:19.308918] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.306 [2024-04-26 16:35:19.308958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.306 [2024-04-26 16:35:19.308974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.306 [2024-04-26 16:35:19.308984] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.306 [2024-04-26 16:35:19.308992] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.306 [2024-04-26 16:35:19.318924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.306 qpair failed and we were unable to recover it. 00:24:10.306 [2024-04-26 16:35:19.328902] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.306 [2024-04-26 16:35:19.328940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.306 [2024-04-26 16:35:19.328958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.306 [2024-04-26 16:35:19.328967] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.306 [2024-04-26 16:35:19.328976] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.565 [2024-04-26 16:35:19.338939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.565 qpair failed and we were unable to recover it. 
00:24:10.565 [2024-04-26 16:35:19.349009] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.565 [2024-04-26 16:35:19.349049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.565 [2024-04-26 16:35:19.349066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.565 [2024-04-26 16:35:19.349075] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.565 [2024-04-26 16:35:19.349084] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.565 [2024-04-26 16:35:19.359177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.565 qpair failed and we were unable to recover it. 00:24:10.565 [2024-04-26 16:35:19.368897] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.565 [2024-04-26 16:35:19.368940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.565 [2024-04-26 16:35:19.368956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.565 [2024-04-26 16:35:19.368966] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.565 [2024-04-26 16:35:19.368975] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.565 [2024-04-26 16:35:19.379026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.565 qpair failed and we were unable to recover it. 00:24:10.565 [2024-04-26 16:35:19.389089] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.565 [2024-04-26 16:35:19.389128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.565 [2024-04-26 16:35:19.389149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.565 [2024-04-26 16:35:19.389158] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.565 [2024-04-26 16:35:19.389167] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.565 [2024-04-26 16:35:19.399300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.565 qpair failed and we were unable to recover it. 
00:24:10.565 [2024-04-26 16:35:19.409071] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.565 [2024-04-26 16:35:19.409110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.565 [2024-04-26 16:35:19.409127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.565 [2024-04-26 16:35:19.409136] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.565 [2024-04-26 16:35:19.409145] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.565 [2024-04-26 16:35:19.419228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.565 qpair failed and we were unable to recover it. 00:24:10.565 [2024-04-26 16:35:19.429243] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.565 [2024-04-26 16:35:19.429278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.565 [2024-04-26 16:35:19.429295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.565 [2024-04-26 16:35:19.429305] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.565 [2024-04-26 16:35:19.429313] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.565 [2024-04-26 16:35:19.439392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.565 qpair failed and we were unable to recover it. 00:24:10.565 [2024-04-26 16:35:19.449203] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.565 [2024-04-26 16:35:19.449241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.565 [2024-04-26 16:35:19.449258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.565 [2024-04-26 16:35:19.449267] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.565 [2024-04-26 16:35:19.449276] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.565 [2024-04-26 16:35:19.459479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.565 qpair failed and we were unable to recover it. 
00:24:10.565 [2024-04-26 16:35:19.469264] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.565 [2024-04-26 16:35:19.469306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.565 [2024-04-26 16:35:19.469323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.565 [2024-04-26 16:35:19.469333] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.565 [2024-04-26 16:35:19.469350] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.565 [2024-04-26 16:35:19.479403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.565 qpair failed and we were unable to recover it. 00:24:10.565 [2024-04-26 16:35:19.489311] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.565 [2024-04-26 16:35:19.489359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.565 [2024-04-26 16:35:19.489375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.565 [2024-04-26 16:35:19.489385] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.565 [2024-04-26 16:35:19.489394] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.565 [2024-04-26 16:35:19.499535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.565 qpair failed and we were unable to recover it. 00:24:10.565 [2024-04-26 16:35:19.509405] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.565 [2024-04-26 16:35:19.509440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.565 [2024-04-26 16:35:19.509457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.565 [2024-04-26 16:35:19.509466] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.565 [2024-04-26 16:35:19.509475] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.565 [2024-04-26 16:35:19.519432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.565 qpair failed and we were unable to recover it. 
00:24:10.565 [2024-04-26 16:35:19.529437] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.565 [2024-04-26 16:35:19.529474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.565 [2024-04-26 16:35:19.529490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.565 [2024-04-26 16:35:19.529500] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.565 [2024-04-26 16:35:19.529509] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.565 [2024-04-26 16:35:19.539734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.565 qpair failed and we were unable to recover it. 00:24:10.565 [2024-04-26 16:35:19.549572] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.566 [2024-04-26 16:35:19.549609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.566 [2024-04-26 16:35:19.549625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.566 [2024-04-26 16:35:19.549635] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.566 [2024-04-26 16:35:19.549644] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.566 [2024-04-26 16:35:19.559701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.566 qpair failed and we were unable to recover it. 00:24:10.566 [2024-04-26 16:35:19.569495] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.566 [2024-04-26 16:35:19.569546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.566 [2024-04-26 16:35:19.569562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.566 [2024-04-26 16:35:19.569572] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.566 [2024-04-26 16:35:19.569580] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.566 [2024-04-26 16:35:19.579727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.566 qpair failed and we were unable to recover it. 
00:24:10.823 [2024-04-26 16:35:19.589575] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.823 [2024-04-26 16:35:19.589627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.823 [2024-04-26 16:35:19.589645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.823 [2024-04-26 16:35:19.589656] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.823 [2024-04-26 16:35:19.589665] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.823 [2024-04-26 16:35:19.599713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.823 qpair failed and we were unable to recover it. 00:24:10.823 [2024-04-26 16:35:19.609768] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.823 [2024-04-26 16:35:19.609800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.823 [2024-04-26 16:35:19.609816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.823 [2024-04-26 16:35:19.609826] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.823 [2024-04-26 16:35:19.609835] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.823 [2024-04-26 16:35:19.619699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.823 qpair failed and we were unable to recover it. 00:24:10.823 [2024-04-26 16:35:19.629712] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.823 [2024-04-26 16:35:19.629755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.823 [2024-04-26 16:35:19.629771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.823 [2024-04-26 16:35:19.629781] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.823 [2024-04-26 16:35:19.629790] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.823 [2024-04-26 16:35:19.639861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.823 qpair failed and we were unable to recover it. 
00:24:10.823 [2024-04-26 16:35:19.649852] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.823 [2024-04-26 16:35:19.649895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.823 [2024-04-26 16:35:19.649912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.823 [2024-04-26 16:35:19.649924] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.823 [2024-04-26 16:35:19.649933] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.823 [2024-04-26 16:35:19.660027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.823 qpair failed and we were unable to recover it. 00:24:10.823 [2024-04-26 16:35:19.669885] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.823 [2024-04-26 16:35:19.669926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.823 [2024-04-26 16:35:19.669943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.823 [2024-04-26 16:35:19.669952] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.823 [2024-04-26 16:35:19.669961] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.823 [2024-04-26 16:35:19.680252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.823 qpair failed and we were unable to recover it. 00:24:10.823 [2024-04-26 16:35:19.689894] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.823 [2024-04-26 16:35:19.689931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.824 [2024-04-26 16:35:19.689949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.824 [2024-04-26 16:35:19.689958] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.824 [2024-04-26 16:35:19.689967] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.824 [2024-04-26 16:35:19.700063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.824 qpair failed and we were unable to recover it. 
00:24:10.824 [2024-04-26 16:35:19.710008] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.824 [2024-04-26 16:35:19.710047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.824 [2024-04-26 16:35:19.710064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.824 [2024-04-26 16:35:19.710074] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.824 [2024-04-26 16:35:19.710083] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.824 [2024-04-26 16:35:19.720085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.824 qpair failed and we were unable to recover it. 00:24:10.824 [2024-04-26 16:35:19.729971] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.824 [2024-04-26 16:35:19.730015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.824 [2024-04-26 16:35:19.730031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.824 [2024-04-26 16:35:19.730041] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.824 [2024-04-26 16:35:19.730050] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.824 [2024-04-26 16:35:19.740261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.824 qpair failed and we were unable to recover it. 00:24:10.824 [2024-04-26 16:35:19.750048] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.824 [2024-04-26 16:35:19.750087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.824 [2024-04-26 16:35:19.750103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.824 [2024-04-26 16:35:19.750113] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.824 [2024-04-26 16:35:19.750122] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.824 [2024-04-26 16:35:19.760238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.824 qpair failed and we were unable to recover it. 
00:24:10.824 [2024-04-26 16:35:19.770131] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.824 [2024-04-26 16:35:19.770163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.824 [2024-04-26 16:35:19.770180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.824 [2024-04-26 16:35:19.770189] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.824 [2024-04-26 16:35:19.770198] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.824 [2024-04-26 16:35:19.780128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.824 qpair failed and we were unable to recover it. 00:24:10.824 [2024-04-26 16:35:19.790178] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.824 [2024-04-26 16:35:19.790215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.824 [2024-04-26 16:35:19.790232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.824 [2024-04-26 16:35:19.790241] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.824 [2024-04-26 16:35:19.790250] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.824 [2024-04-26 16:35:19.800418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.824 qpair failed and we were unable to recover it. 00:24:10.824 [2024-04-26 16:35:19.810264] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.824 [2024-04-26 16:35:19.810304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.824 [2024-04-26 16:35:19.810320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.824 [2024-04-26 16:35:19.810330] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.824 [2024-04-26 16:35:19.810338] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.824 [2024-04-26 16:35:19.820190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.824 qpair failed and we were unable to recover it. 
00:24:10.824 [2024-04-26 16:35:19.830314] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:10.824 [2024-04-26 16:35:19.830358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:10.824 [2024-04-26 16:35:19.830378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:10.824 [2024-04-26 16:35:19.830388] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:10.824 [2024-04-26 16:35:19.830396] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:10.824 [2024-04-26 16:35:19.840604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:10.824 qpair failed and we were unable to recover it. 00:24:11.082 [2024-04-26 16:35:19.850333] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.082 [2024-04-26 16:35:19.850373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.082 [2024-04-26 16:35:19.850389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.082 [2024-04-26 16:35:19.850399] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.082 [2024-04-26 16:35:19.850408] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:11.082 [2024-04-26 16:35:19.860398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:11.082 qpair failed and we were unable to recover it. 00:24:11.082 [2024-04-26 16:35:19.870422] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.082 [2024-04-26 16:35:19.870463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.082 [2024-04-26 16:35:19.870480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.082 [2024-04-26 16:35:19.870489] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.082 [2024-04-26 16:35:19.870498] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:11.082 [2024-04-26 16:35:19.880687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:11.082 qpair failed and we were unable to recover it. 
00:24:11.082 [2024-04-26 16:35:19.890575] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.082 [2024-04-26 16:35:19.890613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.082 [2024-04-26 16:35:19.890630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.082 [2024-04-26 16:35:19.890639] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.082 [2024-04-26 16:35:19.890648] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:11.082 [2024-04-26 16:35:19.900846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:11.082 qpair failed and we were unable to recover it. 00:24:11.082 [2024-04-26 16:35:19.910600] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.082 [2024-04-26 16:35:19.910638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.082 [2024-04-26 16:35:19.910654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.082 [2024-04-26 16:35:19.910664] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.082 [2024-04-26 16:35:19.910676] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:11.082 [2024-04-26 16:35:19.920825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:11.082 qpair failed and we were unable to recover it. 00:24:11.082 [2024-04-26 16:35:19.930677] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.082 [2024-04-26 16:35:19.930714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.082 [2024-04-26 16:35:19.930731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.082 [2024-04-26 16:35:19.930741] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.082 [2024-04-26 16:35:19.930750] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:11.082 [2024-04-26 16:35:19.940930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:11.082 qpair failed and we were unable to recover it. 
00:24:11.082 [2024-04-26 16:35:19.950641] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:11.082 [2024-04-26 16:35:19.950681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:11.082 [2024-04-26 16:35:19.950697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:11.082 [2024-04-26 16:35:19.950707] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:11.082 [2024-04-26 16:35:19.950716] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:11.082 [2024-04-26 16:35:19.960827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:11.082 qpair failed and we were unable to recover it. 00:24:12.015 Write completed with error (sct=0, sc=8) 00:24:12.015 starting I/O failed 00:24:12.015 Read completed with error (sct=0, sc=8) 00:24:12.015 starting I/O failed 00:24:12.015 Write completed with error (sct=0, sc=8) 00:24:12.015 starting I/O failed 00:24:12.015 Read completed with error (sct=0, sc=8) 00:24:12.015 starting I/O failed 00:24:12.015 Write completed with error (sct=0, sc=8) 00:24:12.015 starting I/O failed 00:24:12.015 Read completed with error (sct=0, sc=8) 00:24:12.015 starting I/O failed 00:24:12.015 Write completed with error (sct=0, sc=8) 00:24:12.015 starting I/O failed 00:24:12.015 Write completed with error (sct=0, sc=8) 00:24:12.015 starting I/O failed 00:24:12.015 Read completed with error (sct=0, sc=8) 00:24:12.015 starting I/O failed 00:24:12.015 Write completed with error (sct=0, sc=8) 00:24:12.015 starting I/O failed 00:24:12.015 Write completed with error (sct=0, sc=8) 00:24:12.015 starting I/O failed 00:24:12.015 Write completed with error (sct=0, sc=8) 00:24:12.015 starting I/O failed 00:24:12.015 Write completed with error (sct=0, sc=8) 00:24:12.015 starting I/O failed 00:24:12.015 Read completed with error (sct=0, sc=8) 00:24:12.015 starting I/O failed 00:24:12.015 Write completed with error (sct=0, sc=8) 00:24:12.015 starting I/O failed 00:24:12.015 Read completed with error (sct=0, sc=8) 00:24:12.015 starting I/O failed 00:24:12.015 Read completed with error (sct=0, sc=8) 00:24:12.015 starting I/O failed 00:24:12.015 Write completed with error (sct=0, sc=8) 00:24:12.015 starting I/O failed 00:24:12.015 Read completed with error (sct=0, sc=8) 00:24:12.015 starting I/O failed 00:24:12.015 Read completed with error (sct=0, sc=8) 00:24:12.015 starting I/O failed 00:24:12.015 Read completed with error (sct=0, sc=8) 00:24:12.015 starting I/O failed 00:24:12.015 Read completed with error (sct=0, sc=8) 00:24:12.015 starting I/O failed 00:24:12.015 Write completed with error (sct=0, sc=8) 00:24:12.015 starting I/O failed 00:24:12.015 Write completed with error (sct=0, sc=8) 00:24:12.015 starting I/O failed 00:24:12.015 Write completed with error (sct=0, sc=8) 00:24:12.015 starting I/O failed 00:24:12.015 Read completed with error (sct=0, sc=8) 00:24:12.015 starting I/O failed 00:24:12.015 Write completed with error (sct=0, sc=8) 00:24:12.015 starting I/O failed 00:24:12.015 Read completed with error (sct=0, sc=8) 00:24:12.015 starting I/O failed 00:24:12.015 Write completed with error (sct=0, sc=8) 00:24:12.015 starting I/O failed 00:24:12.015 Write 
completed with error (sct=0, sc=8) 00:24:12.015 starting I/O failed 00:24:12.015 Write completed with error (sct=0, sc=8) 00:24:12.015 starting I/O failed 00:24:12.015 Read completed with error (sct=0, sc=8) 00:24:12.015 starting I/O failed 00:24:12.015 [2024-04-26 16:35:20.965259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:12.015 [2024-04-26 16:35:20.973703] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.015 [2024-04-26 16:35:20.973758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.015 [2024-04-26 16:35:20.973780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.015 [2024-04-26 16:35:20.973792] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.015 [2024-04-26 16:35:20.973803] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d03c0 00:24:12.015 [2024-04-26 16:35:20.984025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:12.015 qpair failed and we were unable to recover it. 00:24:12.015 [2024-04-26 16:35:20.993818] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.015 [2024-04-26 16:35:20.993859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.015 [2024-04-26 16:35:20.993877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.015 [2024-04-26 16:35:20.993887] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.015 [2024-04-26 16:35:20.993896] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d03c0 00:24:12.015 [2024-04-26 16:35:21.004046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:12.015 qpair failed and we were unable to recover it. 00:24:12.015 [2024-04-26 16:35:21.013775] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.015 [2024-04-26 16:35:21.013812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.015 [2024-04-26 16:35:21.013834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.015 [2024-04-26 16:35:21.013845] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.015 [2024-04-26 16:35:21.013855] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:24:12.015 [2024-04-26 16:35:21.024129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:12.015 qpair failed and we were unable to recover it. 
00:24:12.015 [2024-04-26 16:35:21.033837] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.015 [2024-04-26 16:35:21.033880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.015 [2024-04-26 16:35:21.033897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.015 [2024-04-26 16:35:21.033907] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.015 [2024-04-26 16:35:21.033916] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:24:12.273 [2024-04-26 16:35:21.044224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:12.273 qpair failed and we were unable to recover it. 00:24:12.273 [2024-04-26 16:35:21.044366] nvme_ctrlr.c:4340:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:24:12.273 A controller has encountered a failure and is being reset. 00:24:12.273 [2024-04-26 16:35:21.054038] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.273 [2024-04-26 16:35:21.054084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.273 [2024-04-26 16:35:21.054112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.273 [2024-04-26 16:35:21.054127] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.273 [2024-04-26 16:35:21.054141] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:24:12.273 [2024-04-26 16:35:21.064223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.273 qpair failed and we were unable to recover it. 00:24:12.273 [2024-04-26 16:35:21.074067] ctrlr.c: 720:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:12.273 [2024-04-26 16:35:21.074109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:12.273 [2024-04-26 16:35:21.074126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:12.273 [2024-04-26 16:35:21.074136] nvme_rdma.c:1408:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:12.273 [2024-04-26 16:35:21.074144] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:24:12.273 [2024-04-26 16:35:21.084035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:12.273 qpair failed and we were unable to recover it. 
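Each failed attempt above is the same five-message sequence: the target rejects the I/O-queue CONNECT with "Unknown controller ID 0x1", the host sees the Fabrics CONNECT complete with sct 1 / sc 130 (0x82, a command-specific Connect rejection consistent with that unknown-controller complaint), polling the CONNECT fails, the rqpair cannot be connected, and the qpair is abandoned after "CQ transport error -6". The bursts of "completed with error (sct=0, sc=8)" are outstanding I/Os aborted as the submission queue is torn down. To tally these retries from a saved copy of this console output, a quick shell sketch such as the following works; the file name console.log is an assumption for illustration only, not something this job produced:

  # count rejected CONNECT attempts and abandoned qpairs in the captured log
  grep -c 'Unknown controller ID 0x1' console.log
  grep -c 'qpair failed and we were unable to recover it' console.log
  # break the CQ transport errors down by qpair id
  grep -o 'on qpair id [0-9]*' console.log | sort | uniq -c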
00:24:12.273 [2024-04-26 16:35:21.084171] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
00:24:12.273 [2024-04-26 16:35:21.116254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:24:12.273 Controller properly reset.
00:24:12.273 Initializing NVMe Controllers
00:24:12.273 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:24:12.273 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:24:12.273 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:24:12.273 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:24:12.273 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:24:12.273 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:24:12.273 Initialization complete. Launching workers.
00:24:12.273 Starting thread on core 1
00:24:12.273 Starting thread on core 2
00:24:12.273 Starting thread on core 3
00:24:12.273 Starting thread on core 0
00:24:12.273 16:35:21 -- host/target_disconnect.sh@59 -- # sync
00:24:12.273
00:24:12.273 real 0m12.594s
00:24:12.273 user 0m27.198s
00:24:12.273 sys 0m3.348s
00:24:12.273 16:35:21 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:24:12.273 16:35:21 -- common/autotest_common.sh@10 -- # set +x
00:24:12.273 ************************************
00:24:12.273 END TEST nvmf_target_disconnect_tc2
00:24:12.273 ************************************
00:24:12.273 16:35:21 -- host/target_disconnect.sh@80 -- # '[' -n 192.168.100.9 ']'
00:24:12.273 16:35:21 -- host/target_disconnect.sh@81 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3
00:24:12.273 16:35:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:24:12.273 16:35:21 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:24:12.273 16:35:21 -- common/autotest_common.sh@10 -- # set +x
00:24:12.530 ************************************
00:24:12.530 START TEST nvmf_target_disconnect_tc3
00:24:12.530 ************************************
00:24:12.530 16:35:21 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc3
00:24:12.530 16:35:21 -- host/target_disconnect.sh@65 -- # reconnectpid=575033
00:24:12.530 16:35:21 -- host/target_disconnect.sh@67 -- # sleep 2
00:24:12.530 16:35:21 -- host/target_disconnect.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9'
00:24:12.530 EAL: No free 2048 kB hugepages reported on node 1
00:24:14.435 16:35:23 -- host/target_disconnect.sh@68 -- # kill -9 574037
00:24:14.435 16:35:23 -- host/target_disconnect.sh@70 -- # sleep 2
00:24:15.809 Write completed with error (sct=0, sc=8)
00:24:15.809 starting I/O failed
00:24:15.809 Read completed with error (sct=0, sc=8)
00:24:15.809 starting I/O failed
00:24:15.809 Write completed with error (sct=0, sc=8)
00:24:15.809 starting I/O failed
00:24:15.809 Read completed with error (sct=0, sc=8)
00:24:15.809 starting I/O failed
00:24:15.809 Read completed with error (sct=0, sc=8)
00:24:15.809 starting I/O failed
00:24:15.809 Write completed with error (sct=0, sc=8)
00:24:15.809 starting I/O failed
00:24:15.809 Write completed
with error (sct=0, sc=8) 00:24:15.809 starting I/O failed 00:24:15.809 Write completed with error (sct=0, sc=8) 00:24:15.809 starting I/O failed 00:24:15.809 Write completed with error (sct=0, sc=8) 00:24:15.809 starting I/O failed 00:24:15.809 Write completed with error (sct=0, sc=8) 00:24:15.809 starting I/O failed 00:24:15.809 Write completed with error (sct=0, sc=8) 00:24:15.809 starting I/O failed 00:24:15.809 Write completed with error (sct=0, sc=8) 00:24:15.809 starting I/O failed 00:24:15.809 Read completed with error (sct=0, sc=8) 00:24:15.809 starting I/O failed 00:24:15.809 Read completed with error (sct=0, sc=8) 00:24:15.809 starting I/O failed 00:24:15.809 Write completed with error (sct=0, sc=8) 00:24:15.809 starting I/O failed 00:24:15.809 Read completed with error (sct=0, sc=8) 00:24:15.809 starting I/O failed 00:24:15.809 Read completed with error (sct=0, sc=8) 00:24:15.809 starting I/O failed 00:24:15.809 Write completed with error (sct=0, sc=8) 00:24:15.809 starting I/O failed 00:24:15.809 Read completed with error (sct=0, sc=8) 00:24:15.809 starting I/O failed 00:24:15.809 Read completed with error (sct=0, sc=8) 00:24:15.809 starting I/O failed 00:24:15.809 Read completed with error (sct=0, sc=8) 00:24:15.809 starting I/O failed 00:24:15.809 Read completed with error (sct=0, sc=8) 00:24:15.809 starting I/O failed 00:24:15.809 Write completed with error (sct=0, sc=8) 00:24:15.809 starting I/O failed 00:24:15.809 Write completed with error (sct=0, sc=8) 00:24:15.809 starting I/O failed 00:24:15.809 Read completed with error (sct=0, sc=8) 00:24:15.809 starting I/O failed 00:24:15.809 Read completed with error (sct=0, sc=8) 00:24:15.809 starting I/O failed 00:24:15.809 Read completed with error (sct=0, sc=8) 00:24:15.809 starting I/O failed 00:24:15.809 Write completed with error (sct=0, sc=8) 00:24:15.809 starting I/O failed 00:24:15.809 Read completed with error (sct=0, sc=8) 00:24:15.809 starting I/O failed 00:24:15.809 Write completed with error (sct=0, sc=8) 00:24:15.809 starting I/O failed 00:24:15.809 Read completed with error (sct=0, sc=8) 00:24:15.809 starting I/O failed 00:24:15.809 Read completed with error (sct=0, sc=8) 00:24:15.809 starting I/O failed 00:24:15.809 [2024-04-26 16:35:24.560692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:16.376 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 62: 574037 Killed "${NVMF_APP[@]}" "$@" 00:24:16.376 16:35:25 -- host/target_disconnect.sh@71 -- # disconnect_init 192.168.100.9 00:24:16.376 16:35:25 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:24:16.376 16:35:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:16.376 16:35:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:16.376 16:35:25 -- common/autotest_common.sh@10 -- # set +x 00:24:16.376 16:35:25 -- nvmf/common.sh@470 -- # nvmfpid=575585 00:24:16.376 16:35:25 -- nvmf/common.sh@471 -- # waitforlisten 575585 00:24:16.376 16:35:25 -- nvmf/common.sh@469 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:24:16.376 16:35:25 -- common/autotest_common.sh@817 -- # '[' -z 575585 ']' 00:24:16.376 16:35:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:16.376 16:35:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:16.376 16:35:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:24:16.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:16.376 16:35:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:16.376 16:35:25 -- common/autotest_common.sh@10 -- # set +x 00:24:16.635 [2024-04-26 16:35:25.441674] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:24:16.635 [2024-04-26 16:35:25.441728] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:16.635 EAL: No free 2048 kB hugepages reported on node 1 00:24:16.635 [2024-04-26 16:35:25.527525] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:16.635 Write completed with error (sct=0, sc=8) 00:24:16.635 starting I/O failed 00:24:16.635 Write completed with error (sct=0, sc=8) 00:24:16.635 starting I/O failed 00:24:16.635 Write completed with error (sct=0, sc=8) 00:24:16.635 starting I/O failed 00:24:16.635 Read completed with error (sct=0, sc=8) 00:24:16.635 starting I/O failed 00:24:16.635 Read completed with error (sct=0, sc=8) 00:24:16.635 starting I/O failed 00:24:16.635 Read completed with error (sct=0, sc=8) 00:24:16.635 starting I/O failed 00:24:16.635 Read completed with error (sct=0, sc=8) 00:24:16.635 starting I/O failed 00:24:16.635 Write completed with error (sct=0, sc=8) 00:24:16.635 starting I/O failed 00:24:16.635 Read completed with error (sct=0, sc=8) 00:24:16.635 starting I/O failed 00:24:16.635 Read completed with error (sct=0, sc=8) 00:24:16.635 starting I/O failed 00:24:16.635 Read completed with error (sct=0, sc=8) 00:24:16.635 starting I/O failed 00:24:16.635 Write completed with error (sct=0, sc=8) 00:24:16.635 starting I/O failed 00:24:16.635 Read completed with error (sct=0, sc=8) 00:24:16.635 starting I/O failed 00:24:16.635 Read completed with error (sct=0, sc=8) 00:24:16.635 starting I/O failed 00:24:16.635 Write completed with error (sct=0, sc=8) 00:24:16.635 starting I/O failed 00:24:16.635 Write completed with error (sct=0, sc=8) 00:24:16.635 starting I/O failed 00:24:16.635 Read completed with error (sct=0, sc=8) 00:24:16.635 starting I/O failed 00:24:16.635 Read completed with error (sct=0, sc=8) 00:24:16.635 starting I/O failed 00:24:16.635 Read completed with error (sct=0, sc=8) 00:24:16.635 starting I/O failed 00:24:16.635 Write completed with error (sct=0, sc=8) 00:24:16.635 starting I/O failed 00:24:16.635 Read completed with error (sct=0, sc=8) 00:24:16.635 starting I/O failed 00:24:16.635 Read completed with error (sct=0, sc=8) 00:24:16.635 starting I/O failed 00:24:16.635 Write completed with error (sct=0, sc=8) 00:24:16.635 starting I/O failed 00:24:16.635 Read completed with error (sct=0, sc=8) 00:24:16.635 starting I/O failed 00:24:16.635 Read completed with error (sct=0, sc=8) 00:24:16.635 starting I/O failed 00:24:16.635 Write completed with error (sct=0, sc=8) 00:24:16.635 starting I/O failed 00:24:16.635 Write completed with error (sct=0, sc=8) 00:24:16.635 starting I/O failed 00:24:16.635 Read completed with error (sct=0, sc=8) 00:24:16.635 starting I/O failed 00:24:16.635 Read completed with error (sct=0, sc=8) 00:24:16.635 starting I/O failed 00:24:16.635 Write completed with error (sct=0, sc=8) 00:24:16.635 starting I/O failed 00:24:16.635 Write completed with error (sct=0, sc=8) 00:24:16.635 starting I/O failed 00:24:16.635 Read completed with error (sct=0, sc=8) 
00:24:16.635 starting I/O failed 00:24:16.635 [2024-04-26 16:35:25.565242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:16.635 [2024-04-26 16:35:25.603292] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:16.635 [2024-04-26 16:35:25.603339] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:16.635 [2024-04-26 16:35:25.603353] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:16.635 [2024-04-26 16:35:25.603362] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:16.635 [2024-04-26 16:35:25.603369] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:16.635 [2024-04-26 16:35:25.603491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:24:16.635 [2024-04-26 16:35:25.603593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:24:16.635 [2024-04-26 16:35:25.603690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:16.635 [2024-04-26 16:35:25.603692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:24:17.575 16:35:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:17.575 16:35:26 -- common/autotest_common.sh@850 -- # return 0 00:24:17.575 16:35:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:17.575 16:35:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:17.575 16:35:26 -- common/autotest_common.sh@10 -- # set +x 00:24:17.575 16:35:26 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:17.575 16:35:26 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:17.575 16:35:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.575 16:35:26 -- common/autotest_common.sh@10 -- # set +x 00:24:17.575 Malloc0 00:24:17.575 16:35:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.575 16:35:26 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:24:17.575 16:35:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.575 16:35:26 -- common/autotest_common.sh@10 -- # set +x 00:24:17.575 [2024-04-26 16:35:26.349828] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1a513f0/0x1a5d000) succeed. 00:24:17.575 [2024-04-26 16:35:26.360568] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1a52a30/0x1afd090) succeed. 
00:24:17.575 16:35:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.575 16:35:26 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:17.575 16:35:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.575 16:35:26 -- common/autotest_common.sh@10 -- # set +x 00:24:17.575 16:35:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.575 16:35:26 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:17.575 16:35:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.575 16:35:26 -- common/autotest_common.sh@10 -- # set +x 00:24:17.575 16:35:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.575 16:35:26 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:24:17.575 16:35:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.575 16:35:26 -- common/autotest_common.sh@10 -- # set +x 00:24:17.575 [2024-04-26 16:35:26.510253] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:24:17.575 16:35:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.575 16:35:26 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:24:17.575 16:35:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.575 16:35:26 -- common/autotest_common.sh@10 -- # set +x 00:24:17.575 16:35:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.576 16:35:26 -- host/target_disconnect.sh@73 -- # wait 575033 00:24:17.576 Write completed with error (sct=0, sc=8) 00:24:17.576 starting I/O failed 00:24:17.576 Read completed with error (sct=0, sc=8) 00:24:17.576 starting I/O failed 00:24:17.576 Write completed with error (sct=0, sc=8) 00:24:17.576 starting I/O failed 00:24:17.576 Write completed with error (sct=0, sc=8) 00:24:17.576 starting I/O failed 00:24:17.576 Write completed with error (sct=0, sc=8) 00:24:17.576 starting I/O failed 00:24:17.576 Read completed with error (sct=0, sc=8) 00:24:17.576 starting I/O failed 00:24:17.576 Write completed with error (sct=0, sc=8) 00:24:17.576 starting I/O failed 00:24:17.576 Read completed with error (sct=0, sc=8) 00:24:17.576 starting I/O failed 00:24:17.576 Write completed with error (sct=0, sc=8) 00:24:17.576 starting I/O failed 00:24:17.576 Read completed with error (sct=0, sc=8) 00:24:17.576 starting I/O failed 00:24:17.576 Read completed with error (sct=0, sc=8) 00:24:17.576 starting I/O failed 00:24:17.576 Write completed with error (sct=0, sc=8) 00:24:17.576 starting I/O failed 00:24:17.576 Write completed with error (sct=0, sc=8) 00:24:17.576 starting I/O failed 00:24:17.576 Read completed with error (sct=0, sc=8) 00:24:17.576 starting I/O failed 00:24:17.576 Write completed with error (sct=0, sc=8) 00:24:17.576 starting I/O failed 00:24:17.576 Write completed with error (sct=0, sc=8) 00:24:17.576 starting I/O failed 00:24:17.576 Read completed with error (sct=0, sc=8) 00:24:17.576 starting I/O failed 00:24:17.576 Read completed with error (sct=0, sc=8) 00:24:17.576 starting I/O failed 00:24:17.576 Read completed with error (sct=0, sc=8) 00:24:17.576 starting I/O failed 00:24:17.576 Read completed with error (sct=0, sc=8) 00:24:17.576 starting I/O failed 00:24:17.576 Read completed with error (sct=0, sc=8) 00:24:17.576 starting I/O failed 00:24:17.576 Write completed with 
error (sct=0, sc=8) 00:24:17.576 starting I/O failed 00:24:17.576 Read completed with error (sct=0, sc=8) 00:24:17.576 starting I/O failed 00:24:17.576 Write completed with error (sct=0, sc=8) 00:24:17.576 starting I/O failed 00:24:17.576 Read completed with error (sct=0, sc=8) 00:24:17.576 starting I/O failed 00:24:17.576 Write completed with error (sct=0, sc=8) 00:24:17.576 starting I/O failed 00:24:17.576 Read completed with error (sct=0, sc=8) 00:24:17.576 starting I/O failed 00:24:17.576 Read completed with error (sct=0, sc=8) 00:24:17.576 starting I/O failed 00:24:17.576 Read completed with error (sct=0, sc=8) 00:24:17.576 starting I/O failed 00:24:17.576 Read completed with error (sct=0, sc=8) 00:24:17.576 starting I/O failed 00:24:17.576 Write completed with error (sct=0, sc=8) 00:24:17.576 starting I/O failed 00:24:17.576 Read completed with error (sct=0, sc=8) 00:24:17.576 starting I/O failed 00:24:17.576 [2024-04-26 16:35:26.569666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:17.576 [2024-04-26 16:35:26.570838] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:17.576 [2024-04-26 16:35:26.570859] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:17.576 [2024-04-26 16:35:26.570868] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:18.954 [2024-04-26 16:35:27.574117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:18.954 qpair failed and we were unable to recover it. 00:24:18.954 [2024-04-26 16:35:27.575076] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:18.954 [2024-04-26 16:35:27.575093] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:18.954 [2024-04-26 16:35:27.575102] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:19.889 [2024-04-26 16:35:28.578294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:19.889 qpair failed and we were unable to recover it. 00:24:19.889 [2024-04-26 16:35:28.579413] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:19.889 [2024-04-26 16:35:28.579429] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:19.889 [2024-04-26 16:35:28.579438] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:20.824 [2024-04-26 16:35:29.582804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.824 qpair failed and we were unable to recover it. 
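The xtrace lines from host/target_disconnect.sh above build the target configuration through rpc_cmd, which forwards its arguments to the SPDK RPC client. A minimal hand-run sketch of the same sequence, assuming scripts/rpc.py against the default /var/tmp/spdk.sock socket (the RPC names and arguments are exactly the ones traced above):

    # backing bdev plus the RDMA transport
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
    # subsystem, namespace, data listener on 192.168.100.9, and the discovery listener
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420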
00:24:20.824 [2024-04-26 16:35:29.583886] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:20.824 [2024-04-26 16:35:29.583903] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:20.824 [2024-04-26 16:35:29.583911] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:21.757 [2024-04-26 16:35:30.587128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.757 qpair failed and we were unable to recover it. 00:24:21.757 [2024-04-26 16:35:30.588245] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:21.757 [2024-04-26 16:35:30.588263] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:21.757 [2024-04-26 16:35:30.588271] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:22.690 [2024-04-26 16:35:31.591699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.690 qpair failed and we were unable to recover it. 00:24:22.690 [2024-04-26 16:35:31.592843] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:22.690 [2024-04-26 16:35:31.592860] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:22.690 [2024-04-26 16:35:31.592868] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:23.622 [2024-04-26 16:35:32.596039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.622 qpair failed and we were unable to recover it. 00:24:23.622 [2024-04-26 16:35:32.597133] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:23.622 [2024-04-26 16:35:32.597150] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:23.622 [2024-04-26 16:35:32.597158] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:24:24.996 [2024-04-26 16:35:33.600585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.996 qpair failed and we were unable to recover it. 00:24:24.996 [2024-04-26 16:35:33.601840] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:24.996 [2024-04-26 16:35:33.601864] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:24.996 [2024-04-26 16:35:33.601872] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:24:25.931 [2024-04-26 16:35:34.605138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:25.931 qpair failed and we were unable to recover it. 
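The same three-line pattern (REJECTED CM event, RDMA connect error -74, failed rqpair) repeats roughly once per second while the host keeps retrying the dropped listener. A rough way to summarize the retry cadence from a saved copy of this console output (console.log is a hypothetical file name; the test itself does not produce it):

    # count the connect retries and see which qpair ids they hit
    grep -c 'RDMA connect error -74' console.log
    grep -o 'on qpair id [0-9]*' console.log | sort | uniq -c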
00:24:25.931 [2024-04-26 16:35:34.606178] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:25.931 [2024-04-26 16:35:34.606196] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:25.931 [2024-04-26 16:35:34.606205] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:24:26.865 [2024-04-26 16:35:35.609470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:26.866 qpair failed and we were unable to recover it. 00:24:26.866 [2024-04-26 16:35:35.609585] nvme_ctrlr.c:4340:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:24:26.866 A controller has encountered a failure and is being reset. 00:24:26.866 Resorting to new failover address 192.168.100.9 00:24:26.866 [2024-04-26 16:35:35.610926] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:26.866 [2024-04-26 16:35:35.610955] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:26.866 [2024-04-26 16:35:35.610968] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:24:27.801 [2024-04-26 16:35:36.614132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:27.801 qpair failed and we were unable to recover it. 00:24:27.801 [2024-04-26 16:35:36.615227] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:27.801 [2024-04-26 16:35:36.615244] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:27.801 [2024-04-26 16:35:36.615254] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:24:28.736 [2024-04-26 16:35:37.618409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.736 qpair failed and we were unable to recover it. 00:24:28.736 [2024-04-26 16:35:37.618518] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:28.736 [2024-04-26 16:35:37.618628] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:24:28.736 [2024-04-26 16:35:37.620071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:28.736 Controller properly reset. 
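After the keep-alive failure the controller is reset and the host fails over to the second listener address. Outside the scripted run, a quick manual check that the 192.168.100.9 listener is reachable could look like the following nvme-cli sketch (-i 15 mirrors the NVME_CONNECT='nvme connect -i 15' setting used later in this log; none of these commands are issued by the test here):

    # discover what the failover address exposes
    nvme discover -t rdma -a 192.168.100.9 -s 4420
    # connect to the subsystem created above, capping the I/O queue count like the harness does
    nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.9 -s 4420 -i 15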
00:24:29.670 Read completed with error (sct=0, sc=8) 00:24:29.670 starting I/O failed 00:24:29.670 Write completed with error (sct=0, sc=8) 00:24:29.670 starting I/O failed 00:24:29.670 Write completed with error (sct=0, sc=8) 00:24:29.670 starting I/O failed 00:24:29.670 Read completed with error (sct=0, sc=8) 00:24:29.670 starting I/O failed 00:24:29.670 Write completed with error (sct=0, sc=8) 00:24:29.670 starting I/O failed 00:24:29.670 Read completed with error (sct=0, sc=8) 00:24:29.670 starting I/O failed 00:24:29.670 Write completed with error (sct=0, sc=8) 00:24:29.670 starting I/O failed 00:24:29.670 Read completed with error (sct=0, sc=8) 00:24:29.670 starting I/O failed 00:24:29.670 Write completed with error (sct=0, sc=8) 00:24:29.670 starting I/O failed 00:24:29.670 Read completed with error (sct=0, sc=8) 00:24:29.670 starting I/O failed 00:24:29.670 Read completed with error (sct=0, sc=8) 00:24:29.670 starting I/O failed 00:24:29.670 Read completed with error (sct=0, sc=8) 00:24:29.670 starting I/O failed 00:24:29.670 Read completed with error (sct=0, sc=8) 00:24:29.670 starting I/O failed 00:24:29.670 Write completed with error (sct=0, sc=8) 00:24:29.670 starting I/O failed 00:24:29.670 Write completed with error (sct=0, sc=8) 00:24:29.670 starting I/O failed 00:24:29.670 Read completed with error (sct=0, sc=8) 00:24:29.670 starting I/O failed 00:24:29.670 Read completed with error (sct=0, sc=8) 00:24:29.670 starting I/O failed 00:24:29.670 Write completed with error (sct=0, sc=8) 00:24:29.670 starting I/O failed 00:24:29.670 Read completed with error (sct=0, sc=8) 00:24:29.670 starting I/O failed 00:24:29.670 Write completed with error (sct=0, sc=8) 00:24:29.670 starting I/O failed 00:24:29.670 Write completed with error (sct=0, sc=8) 00:24:29.670 starting I/O failed 00:24:29.670 Write completed with error (sct=0, sc=8) 00:24:29.670 starting I/O failed 00:24:29.670 Read completed with error (sct=0, sc=8) 00:24:29.670 starting I/O failed 00:24:29.670 Write completed with error (sct=0, sc=8) 00:24:29.670 starting I/O failed 00:24:29.670 Write completed with error (sct=0, sc=8) 00:24:29.670 starting I/O failed 00:24:29.670 Read completed with error (sct=0, sc=8) 00:24:29.670 starting I/O failed 00:24:29.670 Write completed with error (sct=0, sc=8) 00:24:29.670 starting I/O failed 00:24:29.670 Read completed with error (sct=0, sc=8) 00:24:29.670 starting I/O failed 00:24:29.670 Write completed with error (sct=0, sc=8) 00:24:29.670 starting I/O failed 00:24:29.670 Read completed with error (sct=0, sc=8) 00:24:29.670 starting I/O failed 00:24:29.670 Read completed with error (sct=0, sc=8) 00:24:29.670 starting I/O failed 00:24:29.670 Write completed with error (sct=0, sc=8) 00:24:29.670 starting I/O failed 00:24:29.670 [2024-04-26 16:35:38.662897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:29.930 Initializing NVMe Controllers 00:24:29.930 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:29.930 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:29.930 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:24:29.930 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:24:29.930 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:24:29.930 Associating RDMA (addr:192.168.100.8 
subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:24:29.930 Initialization complete. Launching workers. 00:24:29.930 Starting thread on core 1 00:24:29.930 Starting thread on core 2 00:24:29.930 Starting thread on core 3 00:24:29.930 Starting thread on core 0 00:24:29.930 16:35:38 -- host/target_disconnect.sh@74 -- # sync 00:24:29.930 00:24:29.930 real 0m17.335s 00:24:29.930 user 0m59.972s 00:24:29.930 sys 0m5.888s 00:24:29.930 16:35:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:29.930 16:35:38 -- common/autotest_common.sh@10 -- # set +x 00:24:29.930 ************************************ 00:24:29.930 END TEST nvmf_target_disconnect_tc3 00:24:29.930 ************************************ 00:24:29.930 16:35:38 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:24:29.930 16:35:38 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:24:29.930 16:35:38 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:29.930 16:35:38 -- nvmf/common.sh@117 -- # sync 00:24:29.930 16:35:38 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:24:29.930 16:35:38 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:24:29.930 16:35:38 -- nvmf/common.sh@120 -- # set +e 00:24:29.930 16:35:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:29.930 16:35:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:24:29.930 rmmod nvme_rdma 00:24:29.930 rmmod nvme_fabrics 00:24:29.930 16:35:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:29.930 16:35:38 -- nvmf/common.sh@124 -- # set -e 00:24:29.930 16:35:38 -- nvmf/common.sh@125 -- # return 0 00:24:29.930 16:35:38 -- nvmf/common.sh@478 -- # '[' -n 575585 ']' 00:24:29.930 16:35:38 -- nvmf/common.sh@479 -- # killprocess 575585 00:24:29.930 16:35:38 -- common/autotest_common.sh@936 -- # '[' -z 575585 ']' 00:24:29.930 16:35:38 -- common/autotest_common.sh@940 -- # kill -0 575585 00:24:29.930 16:35:38 -- common/autotest_common.sh@941 -- # uname 00:24:29.930 16:35:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:29.930 16:35:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 575585 00:24:29.930 16:35:38 -- common/autotest_common.sh@942 -- # process_name=reactor_4 00:24:29.930 16:35:38 -- common/autotest_common.sh@946 -- # '[' reactor_4 = sudo ']' 00:24:29.930 16:35:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 575585' 00:24:29.930 killing process with pid 575585 00:24:29.930 16:35:38 -- common/autotest_common.sh@955 -- # kill 575585 00:24:29.930 16:35:38 -- common/autotest_common.sh@960 -- # wait 575585 00:24:29.930 [2024-04-26 16:35:38.949067] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:24:30.190 16:35:39 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:30.190 16:35:39 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:24:30.190 00:24:30.190 real 0m38.133s 00:24:30.190 user 2m23.673s 00:24:30.190 sys 0m14.681s 00:24:30.190 16:35:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:30.190 16:35:39 -- common/autotest_common.sh@10 -- # set +x 00:24:30.190 ************************************ 00:24:30.190 END TEST nvmf_target_disconnect 00:24:30.190 ************************************ 00:24:30.449 16:35:39 -- nvmf/nvmf.sh@123 -- # timing_exit host 00:24:30.449 16:35:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:30.449 16:35:39 -- common/autotest_common.sh@10 -- # set +x 00:24:30.449 16:35:39 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:24:30.449 00:24:30.449 real 17m19.203s 00:24:30.449 user 
45m36.010s 00:24:30.449 sys 4m43.796s 00:24:30.449 16:35:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:30.449 16:35:39 -- common/autotest_common.sh@10 -- # set +x 00:24:30.449 ************************************ 00:24:30.449 END TEST nvmf_rdma 00:24:30.449 ************************************ 00:24:30.449 16:35:39 -- spdk/autotest.sh@283 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:24:30.449 16:35:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:30.449 16:35:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:30.449 16:35:39 -- common/autotest_common.sh@10 -- # set +x 00:24:30.449 ************************************ 00:24:30.449 START TEST spdkcli_nvmf_rdma 00:24:30.449 ************************************ 00:24:30.449 16:35:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:24:30.708 * Looking for test storage... 00:24:30.708 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:24:30.708 16:35:39 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:24:30.708 16:35:39 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:24:30.708 16:35:39 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:24:30.708 16:35:39 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:30.708 16:35:39 -- nvmf/common.sh@7 -- # uname -s 00:24:30.708 16:35:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:30.708 16:35:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:30.708 16:35:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:30.708 16:35:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:30.708 16:35:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:30.708 16:35:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:30.708 16:35:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:30.708 16:35:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:30.708 16:35:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:30.708 16:35:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:30.708 16:35:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:24:30.708 16:35:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:24:30.708 16:35:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:30.708 16:35:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:30.708 16:35:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:30.708 16:35:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:30.708 16:35:39 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:30.708 16:35:39 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:30.708 16:35:39 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:30.708 16:35:39 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:30.708 16:35:39 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.708 16:35:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.708 16:35:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.708 16:35:39 -- paths/export.sh@5 -- # export PATH 00:24:30.708 16:35:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.708 16:35:39 -- nvmf/common.sh@47 -- # : 0 00:24:30.708 16:35:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:30.708 16:35:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:30.708 16:35:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:30.708 16:35:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:30.708 16:35:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:30.708 16:35:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:30.708 16:35:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:30.708 16:35:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:30.708 16:35:39 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:24:30.708 16:35:39 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:24:30.708 16:35:39 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:24:30.708 16:35:39 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:24:30.708 16:35:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:30.708 16:35:39 -- common/autotest_common.sh@10 -- # set +x 00:24:30.708 16:35:39 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:24:30.708 16:35:39 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=577516 00:24:30.708 16:35:39 -- spdkcli/common.sh@34 -- # waitforlisten 577516 00:24:30.708 16:35:39 -- common/autotest_common.sh@817 -- # '[' -z 577516 ']' 00:24:30.708 16:35:39 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:24:30.708 16:35:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:30.708 16:35:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:30.708 16:35:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:24:30.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:30.708 16:35:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:30.708 16:35:39 -- common/autotest_common.sh@10 -- # set +x 00:24:30.708 [2024-04-26 16:35:39.641276] Starting SPDK v24.05-pre git sha1 bba4d07b0 / DPDK 23.11.0 initialization... 00:24:30.708 [2024-04-26 16:35:39.641332] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid577516 ] 00:24:30.708 EAL: No free 2048 kB hugepages reported on node 1 00:24:30.708 [2024-04-26 16:35:39.713074] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:30.966 [2024-04-26 16:35:39.792039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:30.966 [2024-04-26 16:35:39.792042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.533 16:35:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:31.533 16:35:40 -- common/autotest_common.sh@850 -- # return 0 00:24:31.533 16:35:40 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:24:31.533 16:35:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:31.533 16:35:40 -- common/autotest_common.sh@10 -- # set +x 00:24:31.533 16:35:40 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:24:31.533 16:35:40 -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:24:31.533 16:35:40 -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:24:31.533 16:35:40 -- nvmf/common.sh@430 -- # '[' -z rdma ']' 00:24:31.533 16:35:40 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:31.533 16:35:40 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:31.533 16:35:40 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:31.533 16:35:40 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:31.533 16:35:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.533 16:35:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:31.533 16:35:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.533 16:35:40 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:24:31.533 16:35:40 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:31.533 16:35:40 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:31.533 16:35:40 -- common/autotest_common.sh@10 -- # set +x 00:24:38.096 16:35:46 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:38.096 16:35:46 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:38.096 16:35:46 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:38.096 16:35:46 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:38.096 16:35:46 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:38.096 16:35:46 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:38.096 16:35:46 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:38.096 16:35:46 -- nvmf/common.sh@295 -- # net_devs=() 00:24:38.096 16:35:46 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:38.096 16:35:46 -- nvmf/common.sh@296 -- # e810=() 00:24:38.096 16:35:46 -- nvmf/common.sh@296 -- # local -ga e810 00:24:38.096 16:35:46 -- nvmf/common.sh@297 -- # x722=() 00:24:38.096 16:35:46 -- nvmf/common.sh@297 -- # local -ga x722 00:24:38.096 16:35:46 -- nvmf/common.sh@298 -- # mlx=() 00:24:38.096 16:35:46 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:38.096 16:35:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
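The spdkcli test starts its own nvmf_tgt on a 0x3 core mask and blocks in waitforlisten until the RPC socket answers before running nvmftestinit. A rough stand-alone equivalent, assuming the default /var/tmp/spdk.sock socket shown in the trace (the polling loop approximates waitforlisten rather than reproducing it):

    ./build/bin/nvmf_tgt -m 0x3 -p 0 &
    nvmf_pid=$!
    # poll the RPC socket until the target accepts commands
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "nvmf_tgt ($nvmf_pid) is listening on /var/tmp/spdk.sock"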
00:24:38.096 16:35:46 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:38.096 16:35:46 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:38.096 16:35:46 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:38.096 16:35:46 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:38.096 16:35:46 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:38.096 16:35:46 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:38.096 16:35:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:38.096 16:35:46 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:38.096 16:35:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:38.096 16:35:46 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:38.096 16:35:46 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:38.096 16:35:46 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:24:38.096 16:35:46 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:24:38.096 16:35:46 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:24:38.096 16:35:46 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:24:38.096 16:35:46 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:24:38.096 16:35:46 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:38.096 16:35:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:38.096 16:35:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:24:38.096 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:24:38.096 16:35:46 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:38.096 16:35:46 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:38.096 16:35:46 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:24:38.096 16:35:46 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:24:38.096 16:35:46 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:38.096 16:35:46 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:38.096 16:35:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:38.096 16:35:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:24:38.096 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:24:38.096 16:35:46 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:38.096 16:35:46 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:38.096 16:35:46 -- nvmf/common.sh@350 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:24:38.096 16:35:46 -- nvmf/common.sh@351 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:24:38.096 16:35:46 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:38.096 16:35:46 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:38.096 16:35:46 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:38.096 16:35:46 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:24:38.096 16:35:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:38.096 16:35:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.096 16:35:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:38.096 16:35:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.096 16:35:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:24:38.096 Found net devices under 0000:18:00.0: mlx_0_0 00:24:38.096 16:35:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.096 16:35:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
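gather_supported_nvmf_pci_devs walks a cached PCI table for the supported Intel and Mellanox IDs and then maps each matching function to its netdev through the sysfs glob traced above. A hand-run equivalent for the two ConnectX functions found on this node (lspci usage is an assumption; the sysfs paths are the ones the script expands):

    # list Mellanox (vendor 0x15b3) functions with their device IDs
    lspci -nn -d 15b3:
    # map each function to its kernel netdev, as the pci_net_devs glob does
    ls /sys/bus/pci/devices/0000:18:00.0/net
    ls /sys/bus/pci/devices/0000:18:00.1/net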
00:24:38.096 16:35:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.096 16:35:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:38.096 16:35:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.096 16:35:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:24:38.096 Found net devices under 0000:18:00.1: mlx_0_1 00:24:38.096 16:35:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.096 16:35:46 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:38.096 16:35:46 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:38.096 16:35:46 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:38.096 16:35:46 -- nvmf/common.sh@406 -- # [[ rdma == tcp ]] 00:24:38.096 16:35:46 -- nvmf/common.sh@408 -- # [[ rdma == rdma ]] 00:24:38.096 16:35:46 -- nvmf/common.sh@409 -- # rdma_device_init 00:24:38.096 16:35:46 -- nvmf/common.sh@490 -- # load_ib_rdma_modules 00:24:38.096 16:35:46 -- nvmf/common.sh@58 -- # uname 00:24:38.096 16:35:46 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:24:38.096 16:35:46 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:24:38.096 16:35:46 -- nvmf/common.sh@63 -- # modprobe ib_core 00:24:38.096 16:35:46 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:24:38.096 16:35:46 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:24:38.096 16:35:46 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:24:38.096 16:35:46 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:24:38.096 16:35:46 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:24:38.096 16:35:46 -- nvmf/common.sh@491 -- # allocate_nic_ips 00:24:38.096 16:35:46 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:38.096 16:35:46 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:24:38.096 16:35:46 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:38.096 16:35:46 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:38.096 16:35:46 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:38.096 16:35:46 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:38.096 16:35:46 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:38.096 16:35:46 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:38.096 16:35:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:38.096 16:35:46 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:38.096 16:35:46 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:38.096 16:35:46 -- nvmf/common.sh@105 -- # continue 2 00:24:38.096 16:35:46 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:38.096 16:35:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:38.096 16:35:46 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:38.096 16:35:46 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:38.096 16:35:46 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:38.096 16:35:46 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:38.096 16:35:46 -- nvmf/common.sh@105 -- # continue 2 00:24:38.096 16:35:46 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:38.096 16:35:46 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:24:38.096 16:35:46 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:38.096 16:35:46 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:38.096 16:35:46 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:38.096 16:35:46 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:38.096 16:35:47 -- nvmf/common.sh@74 -- # 
ip=192.168.100.8 00:24:38.096 16:35:47 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:24:38.096 16:35:47 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:24:38.096 15: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:38.096 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:24:38.096 altname enp24s0f0np0 00:24:38.096 altname ens785f0np0 00:24:38.096 inet 192.168.100.8/24 scope global mlx_0_0 00:24:38.096 valid_lft forever preferred_lft forever 00:24:38.096 16:35:47 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:38.096 16:35:47 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:24:38.096 16:35:47 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:38.096 16:35:47 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:38.096 16:35:47 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:38.096 16:35:47 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:38.096 16:35:47 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:24:38.096 16:35:47 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:24:38.096 16:35:47 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:24:38.096 16: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:38.096 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:24:38.096 altname enp24s0f1np1 00:24:38.096 altname ens785f1np1 00:24:38.096 inet 192.168.100.9/24 scope global mlx_0_1 00:24:38.096 valid_lft forever preferred_lft forever 00:24:38.096 16:35:47 -- nvmf/common.sh@411 -- # return 0 00:24:38.096 16:35:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:38.096 16:35:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:38.096 16:35:47 -- nvmf/common.sh@444 -- # [[ rdma == \r\d\m\a ]] 00:24:38.096 16:35:47 -- nvmf/common.sh@445 -- # get_available_rdma_ips 00:24:38.096 16:35:47 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:24:38.096 16:35:47 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:38.096 16:35:47 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:38.096 16:35:47 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:38.096 16:35:47 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:38.096 16:35:47 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:38.096 16:35:47 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:38.096 16:35:47 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:38.096 16:35:47 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:38.096 16:35:47 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:38.096 16:35:47 -- nvmf/common.sh@105 -- # continue 2 00:24:38.096 16:35:47 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:38.096 16:35:47 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:38.096 16:35:47 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:38.096 16:35:47 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:38.096 16:35:47 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:38.097 16:35:47 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:38.097 16:35:47 -- nvmf/common.sh@105 -- # continue 2 00:24:38.097 16:35:47 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:38.097 16:35:47 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:24:38.097 16:35:47 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:38.097 16:35:47 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:38.097 16:35:47 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:38.097 16:35:47 -- 
nvmf/common.sh@113 -- # cut -d/ -f1 00:24:38.097 16:35:47 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:38.097 16:35:47 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:24:38.097 16:35:47 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:38.097 16:35:47 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:38.097 16:35:47 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:38.097 16:35:47 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:38.097 16:35:47 -- nvmf/common.sh@445 -- # RDMA_IP_LIST='192.168.100.8 00:24:38.097 192.168.100.9' 00:24:38.097 16:35:47 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:38.097 192.168.100.9' 00:24:38.097 16:35:47 -- nvmf/common.sh@446 -- # head -n 1 00:24:38.097 16:35:47 -- nvmf/common.sh@446 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:38.097 16:35:47 -- nvmf/common.sh@447 -- # echo '192.168.100.8 00:24:38.097 192.168.100.9' 00:24:38.097 16:35:47 -- nvmf/common.sh@447 -- # tail -n +2 00:24:38.097 16:35:47 -- nvmf/common.sh@447 -- # head -n 1 00:24:38.097 16:35:47 -- nvmf/common.sh@447 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:38.097 16:35:47 -- nvmf/common.sh@448 -- # '[' -z 192.168.100.8 ']' 00:24:38.097 16:35:47 -- nvmf/common.sh@452 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:38.097 16:35:47 -- nvmf/common.sh@457 -- # '[' rdma == tcp ']' 00:24:38.097 16:35:47 -- nvmf/common.sh@457 -- # '[' rdma == rdma ']' 00:24:38.097 16:35:47 -- nvmf/common.sh@463 -- # modprobe nvme-rdma 00:24:38.355 16:35:47 -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:24:38.355 16:35:47 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:24:38.355 16:35:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:38.355 16:35:47 -- common/autotest_common.sh@10 -- # set +x 00:24:38.355 16:35:47 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:24:38.355 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:24:38.355 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:24:38.355 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:24:38.355 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:24:38.355 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:24:38.355 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:24:38.355 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:38.355 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:24:38.355 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:24:38.355 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:24:38.355 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:38.355 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:24:38.355 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:24:38.355 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 
allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:38.355 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:24:38.355 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:24:38.355 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:24:38.355 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:38.355 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:38.355 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:24:38.355 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:24:38.355 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:24:38.355 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:24:38.355 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:38.355 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:24:38.355 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:24:38.355 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:24:38.355 ' 00:24:38.611 [2024-04-26 16:35:47.494167] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:24:41.141 [2024-04-26 16:35:49.576673] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xd44790/0xd55c30) succeed. 00:24:41.141 [2024-04-26 16:35:49.586962] rdma.c:2562:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xd45c40/0xd972c0) succeed. 
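spdkcli_job.py feeds the whole quoted command/expected-output list above to the target in one pass. The same tree can be built one step at a time with scripts/spdkcli.py, which takes a command line as its arguments (the ll /nvmf match step further down uses it that way); a short sample with commands taken verbatim from the list above:

    ./scripts/spdkcli.py '/bdevs/malloc create 32 512 Malloc1'
    ./scripts/spdkcli.py 'nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'
    ./scripts/spdkcli.py '/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'
    ./scripts/spdkcli.py '/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'
    # inspect the resulting configuration, as the match step does
    ./scripts/spdkcli.py ll /nvmf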
00:24:42.075 [2024-04-26 16:35:50.853218] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:24:44.605 [2024-04-26 16:35:53.188469] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:24:46.503 [2024-04-26 16:35:55.243153] rdma.c:3018:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:24:47.878 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:24:47.878 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:24:47.878 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:24:47.878 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:24:47.878 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:24:47.878 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:24:47.878 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:24:47.878 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:24:47.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:24:47.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:24:47.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:24:47.878 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:47.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:24:47.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:24:47.878 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:47.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:24:47.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:24:47.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:24:47.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:24:47.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:47.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:24:47.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:24:47.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:24:47.878 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:24:47.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:24:47.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:24:47.878 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:24:47.878 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:24:48.136 16:35:56 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:24:48.136 16:35:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:48.136 16:35:56 -- common/autotest_common.sh@10 -- # set +x 00:24:48.136 16:35:56 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:24:48.136 16:35:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:48.136 16:35:56 -- common/autotest_common.sh@10 -- # set +x 00:24:48.136 16:35:56 -- spdkcli/nvmf.sh@69 -- # check_match 00:24:48.136 16:35:56 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:24:48.394 16:35:57 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:24:48.394 16:35:57 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:24:48.394 16:35:57 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:24:48.394 16:35:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:48.394 16:35:57 -- common/autotest_common.sh@10 -- # set +x 00:24:48.394 16:35:57 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:24:48.394 16:35:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:48.394 16:35:57 -- common/autotest_common.sh@10 -- # set +x 00:24:48.394 16:35:57 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:24:48.394 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:24:48.394 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:24:48.394 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:24:48.394 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:24:48.394 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:24:48.394 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:24:48.394 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:24:48.394 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:24:48.394 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:24:48.394 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:24:48.394 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:24:48.394 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:24:48.394 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:24:48.394 ' 00:24:53.655 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:24:53.655 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:24:53.655 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:24:53.655 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:24:53.655 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:24:53.655 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:24:53.655 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:24:53.655 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:24:53.655 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:24:53.655 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:24:53.655 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:24:53.655 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:24:53.655 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:24:53.655 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:24:53.655 16:36:02 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:24:53.655 16:36:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:53.655 16:36:02 -- common/autotest_common.sh@10 -- # set +x 00:24:53.655 16:36:02 -- spdkcli/nvmf.sh@90 -- # killprocess 577516 00:24:53.655 16:36:02 -- common/autotest_common.sh@936 -- # '[' -z 577516 ']' 00:24:53.655 16:36:02 -- common/autotest_common.sh@940 -- # kill -0 577516 00:24:53.655 16:36:02 -- common/autotest_common.sh@941 -- # uname 00:24:53.655 16:36:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:53.655 16:36:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 577516 00:24:53.655 16:36:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:53.655 16:36:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:53.655 16:36:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 577516' 00:24:53.655 killing process with pid 577516 00:24:53.655 16:36:02 -- common/autotest_common.sh@955 -- # kill 577516 00:24:53.655 [2024-04-26 16:36:02.454956] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:24:53.655 16:36:02 -- common/autotest_common.sh@960 -- # wait 577516 00:24:53.655 [2024-04-26 16:36:02.509335] rdma.c:2871:nvmf_rdma_destroy: *ERROR*: transport wr pool count is 4095 but should be 2048 00:24:53.914 16:36:02 -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:24:53.914 16:36:02 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:53.914 16:36:02 -- nvmf/common.sh@117 -- # sync 00:24:53.914 16:36:02 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:24:53.914 16:36:02 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:24:53.914 16:36:02 -- nvmf/common.sh@120 -- # set +e 00:24:53.914 16:36:02 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:53.914 16:36:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:24:53.914 rmmod 
nvme_rdma 00:24:53.914 rmmod nvme_fabrics 00:24:53.914 16:36:02 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:53.914 16:36:02 -- nvmf/common.sh@124 -- # set -e 00:24:53.914 16:36:02 -- nvmf/common.sh@125 -- # return 0 00:24:53.914 16:36:02 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:24:53.914 16:36:02 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:53.914 16:36:02 -- nvmf/common.sh@484 -- # [[ rdma == \t\c\p ]] 00:24:53.914 00:24:53.914 real 0m23.296s 00:24:53.914 user 0m50.000s 00:24:53.914 sys 0m6.048s 00:24:53.914 16:36:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:53.914 16:36:02 -- common/autotest_common.sh@10 -- # set +x 00:24:53.914 ************************************ 00:24:53.914 END TEST spdkcli_nvmf_rdma 00:24:53.914 ************************************ 00:24:53.914 16:36:02 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:24:53.914 16:36:02 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:24:53.914 16:36:02 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:24:53.914 16:36:02 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:24:53.914 16:36:02 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:24:53.914 16:36:02 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:24:53.914 16:36:02 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:24:53.914 16:36:02 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:24:53.914 16:36:02 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:24:53.914 16:36:02 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:24:53.914 16:36:02 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:24:53.914 16:36:02 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:24:53.914 16:36:02 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:24:53.914 16:36:02 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:24:53.914 16:36:02 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:24:53.914 16:36:02 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:24:53.914 16:36:02 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:24:53.914 16:36:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:53.914 16:36:02 -- common/autotest_common.sh@10 -- # set +x 00:24:53.914 16:36:02 -- spdk/autotest.sh@381 -- # autotest_cleanup 00:24:53.914 16:36:02 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:24:53.914 16:36:02 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:24:53.914 16:36:02 -- common/autotest_common.sh@10 -- # set +x 00:24:58.123 INFO: APP EXITING 00:24:58.123 INFO: killing all VMs 00:24:58.123 INFO: killing vhost app 00:24:58.123 WARN: no vhost pid file found 00:24:58.123 INFO: EXIT DONE 00:25:01.485 Waiting for block devices as requested 00:25:01.485 0000:5e:00.0 (144d a80a): vfio-pci -> nvme 00:25:01.485 0000:af:00.0 (8086 2701): vfio-pci -> nvme 00:25:01.485 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:01.485 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:01.485 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:01.485 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:01.485 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:01.744 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:01.744 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:01.744 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:02.003 0000:b0:00.0 (8086 2701): vfio-pci -> nvme 00:25:02.003 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:02.003 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:02.262 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:02.262 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:02.262 0000:80:04.3 (8086 2021): vfio-pci -> 
ioatdma 00:25:02.521 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:02.521 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:02.521 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:06.736 Cleaning 00:25:06.736 Removing: /var/run/dpdk/spdk0/config 00:25:06.736 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:25:06.736 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:25:06.736 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:25:06.736 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:25:06.736 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:25:06.736 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:25:06.736 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:25:06.736 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:25:06.736 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:25:06.736 Removing: /var/run/dpdk/spdk0/hugepage_info 00:25:06.736 Removing: /var/run/dpdk/spdk1/config 00:25:06.736 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:25:06.736 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:25:06.736 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:25:06.736 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:25:06.736 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:25:06.736 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:25:06.736 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:25:06.736 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:25:06.736 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:25:06.736 Removing: /var/run/dpdk/spdk1/hugepage_info 00:25:06.736 Removing: /var/run/dpdk/spdk1/mp_socket 00:25:06.736 Removing: /var/run/dpdk/spdk2/config 00:25:06.736 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:25:06.736 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:25:06.736 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:25:06.736 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:25:06.736 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:25:06.736 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:25:06.736 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:25:06.736 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:25:06.736 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:25:06.736 Removing: /var/run/dpdk/spdk2/hugepage_info 00:25:06.736 Removing: /var/run/dpdk/spdk3/config 00:25:06.736 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:25:06.736 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:25:06.736 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:25:06.736 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:25:06.736 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:25:06.736 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:25:06.736 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:25:06.736 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:25:06.736 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:25:06.736 Removing: /var/run/dpdk/spdk3/hugepage_info 00:25:06.736 Removing: /var/run/dpdk/spdk4/config 00:25:06.736 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:25:06.736 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:25:06.736 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:25:06.736 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:25:06.736 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:25:06.736 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:25:06.736 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:25:06.736 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:25:06.736 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:25:06.736 Removing: /var/run/dpdk/spdk4/hugepage_info 00:25:06.736 Removing: /dev/shm/bdevperf_trace.pid427559 00:25:06.736 Removing: /dev/shm/bdevperf_trace.pid512838 00:25:06.736 Removing: /dev/shm/bdev_svc_trace.1 00:25:06.736 Removing: /dev/shm/nvmf_trace.0 00:25:06.736 Removing: /dev/shm/spdk_tgt_trace.pid326418 00:25:06.736 Removing: /var/run/dpdk/spdk0 00:25:06.736 Removing: /var/run/dpdk/spdk1 00:25:06.736 Removing: /var/run/dpdk/spdk2 00:25:06.736 Removing: /var/run/dpdk/spdk3 00:25:06.736 Removing: /var/run/dpdk/spdk4 00:25:06.736 Removing: /var/run/dpdk/spdk_pid325714 00:25:06.736 Removing: /var/run/dpdk/spdk_pid326418 00:25:06.736 Removing: /var/run/dpdk/spdk_pid327015 00:25:06.736 Removing: /var/run/dpdk/spdk_pid327787 00:25:06.736 Removing: /var/run/dpdk/spdk_pid328001 00:25:06.736 Removing: /var/run/dpdk/spdk_pid328931 00:25:06.736 Removing: /var/run/dpdk/spdk_pid328955 00:25:06.736 Removing: /var/run/dpdk/spdk_pid329269 00:25:06.736 Removing: /var/run/dpdk/spdk_pid333081 00:25:06.736 Removing: /var/run/dpdk/spdk_pid333842 00:25:06.736 Removing: /var/run/dpdk/spdk_pid334087 00:25:06.736 Removing: /var/run/dpdk/spdk_pid334348 00:25:06.736 Removing: /var/run/dpdk/spdk_pid334762 00:25:06.736 Removing: /var/run/dpdk/spdk_pid335026 00:25:06.736 Removing: /var/run/dpdk/spdk_pid335238 00:25:06.736 Removing: /var/run/dpdk/spdk_pid335450 00:25:06.736 Removing: /var/run/dpdk/spdk_pid335683 00:25:06.736 Removing: /var/run/dpdk/spdk_pid336477 00:25:06.736 Removing: /var/run/dpdk/spdk_pid338935 00:25:06.736 Removing: /var/run/dpdk/spdk_pid339321 00:25:06.736 Removing: /var/run/dpdk/spdk_pid339552 00:25:06.736 Removing: /var/run/dpdk/spdk_pid339734 00:25:06.736 Removing: /var/run/dpdk/spdk_pid340173 00:25:06.736 Removing: /var/run/dpdk/spdk_pid340324 00:25:06.736 Removing: /var/run/dpdk/spdk_pid340736 00:25:06.736 Removing: /var/run/dpdk/spdk_pid340919 00:25:06.736 Removing: /var/run/dpdk/spdk_pid341135 00:25:06.736 Removing: /var/run/dpdk/spdk_pid341319 00:25:06.736 Removing: /var/run/dpdk/spdk_pid341541 00:25:06.736 Removing: /var/run/dpdk/spdk_pid341558 00:25:06.736 Removing: /var/run/dpdk/spdk_pid342035 00:25:06.736 Removing: /var/run/dpdk/spdk_pid342256 00:25:06.736 Removing: /var/run/dpdk/spdk_pid342632 00:25:06.736 Removing: /var/run/dpdk/spdk_pid342899 00:25:06.736 Removing: /var/run/dpdk/spdk_pid342931 00:25:06.736 Removing: /var/run/dpdk/spdk_pid343186 00:25:06.736 Removing: /var/run/dpdk/spdk_pid343401 00:25:06.736 Removing: /var/run/dpdk/spdk_pid343622 00:25:06.736 Removing: /var/run/dpdk/spdk_pid343943 00:25:06.736 Removing: /var/run/dpdk/spdk_pid344195 00:25:06.736 Removing: /var/run/dpdk/spdk_pid344407 00:25:06.736 Removing: /var/run/dpdk/spdk_pid344616 00:25:06.736 Removing: /var/run/dpdk/spdk_pid344824 00:25:06.736 Removing: /var/run/dpdk/spdk_pid345039 00:25:06.736 Removing: /var/run/dpdk/spdk_pid345274 00:25:06.736 Removing: /var/run/dpdk/spdk_pid345569 00:25:06.736 Removing: /var/run/dpdk/spdk_pid345883 00:25:06.736 Removing: /var/run/dpdk/spdk_pid346170 00:25:06.736 Removing: /var/run/dpdk/spdk_pid346377 00:25:06.736 Removing: /var/run/dpdk/spdk_pid346868 00:25:06.736 Removing: /var/run/dpdk/spdk_pid347194 00:25:06.736 Removing: /var/run/dpdk/spdk_pid347486 00:25:06.736 Removing: /var/run/dpdk/spdk_pid347775 00:25:06.736 
Removing: /var/run/dpdk/spdk_pid347990 00:25:06.736 Removing: /var/run/dpdk/spdk_pid348201 00:25:06.736 Removing: /var/run/dpdk/spdk_pid348409 00:25:06.736 Removing: /var/run/dpdk/spdk_pid348654 00:25:06.736 Removing: /var/run/dpdk/spdk_pid348931 00:25:06.736 Removing: /var/run/dpdk/spdk_pid352414 00:25:06.736 Removing: /var/run/dpdk/spdk_pid395044 00:25:06.736 Removing: /var/run/dpdk/spdk_pid398589 00:25:06.736 Removing: /var/run/dpdk/spdk_pid406039 00:25:06.736 Removing: /var/run/dpdk/spdk_pid410411 00:25:06.736 Removing: /var/run/dpdk/spdk_pid413523 00:25:06.736 Removing: /var/run/dpdk/spdk_pid414178 00:25:06.736 Removing: /var/run/dpdk/spdk_pid427559 00:25:06.736 Removing: /var/run/dpdk/spdk_pid427927 00:25:06.736 Removing: /var/run/dpdk/spdk_pid431589 00:25:06.736 Removing: /var/run/dpdk/spdk_pid436966 00:25:06.736 Removing: /var/run/dpdk/spdk_pid439049 00:25:06.736 Removing: /var/run/dpdk/spdk_pid447630 00:25:06.736 Removing: /var/run/dpdk/spdk_pid469568 00:25:06.736 Removing: /var/run/dpdk/spdk_pid472947 00:25:06.736 Removing: /var/run/dpdk/spdk_pid485998 00:25:06.736 Removing: /var/run/dpdk/spdk_pid511052 00:25:06.995 Removing: /var/run/dpdk/spdk_pid511934 00:25:06.995 Removing: /var/run/dpdk/spdk_pid512838 00:25:06.995 Removing: /var/run/dpdk/spdk_pid516514 00:25:06.995 Removing: /var/run/dpdk/spdk_pid522767 00:25:06.995 Removing: /var/run/dpdk/spdk_pid523513 00:25:06.995 Removing: /var/run/dpdk/spdk_pid524220 00:25:06.995 Removing: /var/run/dpdk/spdk_pid525005 00:25:06.995 Removing: /var/run/dpdk/spdk_pid525299 00:25:06.995 Removing: /var/run/dpdk/spdk_pid529164 00:25:06.995 Removing: /var/run/dpdk/spdk_pid529166 00:25:06.995 Removing: /var/run/dpdk/spdk_pid532883 00:25:06.995 Removing: /var/run/dpdk/spdk_pid533415 00:25:06.995 Removing: /var/run/dpdk/spdk_pid533783 00:25:06.995 Removing: /var/run/dpdk/spdk_pid534330 00:25:06.995 Removing: /var/run/dpdk/spdk_pid534430 00:25:06.995 Removing: /var/run/dpdk/spdk_pid538381 00:25:06.995 Removing: /var/run/dpdk/spdk_pid538824 00:25:06.995 Removing: /var/run/dpdk/spdk_pid542429 00:25:06.995 Removing: /var/run/dpdk/spdk_pid545180 00:25:06.995 Removing: /var/run/dpdk/spdk_pid551450 00:25:06.995 Removing: /var/run/dpdk/spdk_pid551452 00:25:06.995 Removing: /var/run/dpdk/spdk_pid567969 00:25:06.995 Removing: /var/run/dpdk/spdk_pid568161 00:25:06.995 Removing: /var/run/dpdk/spdk_pid573120 00:25:06.995 Removing: /var/run/dpdk/spdk_pid573522 00:25:06.995 Removing: /var/run/dpdk/spdk_pid575033 00:25:06.995 Removing: /var/run/dpdk/spdk_pid577516 00:25:06.995 Clean 00:25:07.254 16:36:16 -- common/autotest_common.sh@1437 -- # return 0 00:25:07.254 16:36:16 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:25:07.254 16:36:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:07.254 16:36:16 -- common/autotest_common.sh@10 -- # set +x 00:25:07.254 16:36:16 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:25:07.254 16:36:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:07.254 16:36:16 -- common/autotest_common.sh@10 -- # set +x 00:25:07.254 16:36:16 -- spdk/autotest.sh@385 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:25:07.254 16:36:16 -- spdk/autotest.sh@387 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:25:07.254 16:36:16 -- spdk/autotest.sh@387 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:25:07.254 16:36:16 -- spdk/autotest.sh@389 -- # hash lcov 00:25:07.254 16:36:16 -- spdk/autotest.sh@389 -- # [[ 
CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:25:07.254 16:36:16 -- spdk/autotest.sh@391 -- # hostname 00:25:07.254 16:36:16 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-29 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:25:07.514 geninfo: WARNING: invalid characters removed from testname! 00:25:29.446 16:36:35 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:25:29.446 16:36:37 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:25:30.381 16:36:39 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:25:32.281 16:36:41 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:25:33.657 16:36:42 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:25:35.560 16:36:44 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:25:36.938 16:36:45 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:25:37.197 16:36:45 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:37.197 16:36:45 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:25:37.197 16:36:45 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:37.197 16:36:45 -- scripts/common.sh@517 -- $ source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:25:37.197 16:36:45 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.197 16:36:45 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.197 16:36:45 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.197 16:36:45 -- paths/export.sh@5 -- $ export PATH 00:25:37.197 16:36:45 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.198 16:36:45 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:25:37.198 16:36:45 -- common/autobuild_common.sh@435 -- $ date +%s 00:25:37.198 16:36:45 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714142205.XXXXXX 00:25:37.198 16:36:45 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714142205.uxIiWd 00:25:37.198 16:36:45 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:25:37.198 16:36:45 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:25:37.198 16:36:45 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:25:37.198 16:36:45 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:25:37.198 16:36:45 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:25:37.198 16:36:46 -- common/autobuild_common.sh@451 -- $ get_config_params 00:25:37.198 16:36:46 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:25:37.198 16:36:46 -- common/autotest_common.sh@10 -- $ set +x 00:25:37.198 16:36:46 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:25:37.198 16:36:46 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:25:37.198 16:36:46 -- pm/common@17 -- $ local monitor 
00:25:37.198 16:36:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:37.198 16:36:46 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=591236 00:25:37.198 16:36:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:37.198 16:36:46 -- pm/common@21 -- $ date +%s 00:25:37.198 16:36:46 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=591238 00:25:37.198 16:36:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:37.198 16:36:46 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=591241 00:25:37.198 16:36:46 -- pm/common@21 -- $ date +%s 00:25:37.198 16:36:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:37.198 16:36:46 -- pm/common@21 -- $ date +%s 00:25:37.198 16:36:46 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714142206 00:25:37.198 16:36:46 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=591243 00:25:37.198 16:36:46 -- pm/common@26 -- $ sleep 1 00:25:37.198 16:36:46 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714142206 00:25:37.198 16:36:46 -- pm/common@21 -- $ date +%s 00:25:37.198 16:36:46 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714142206 00:25:37.198 16:36:46 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714142206 00:25:37.198 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714142206_collect-vmstat.pm.log 00:25:37.198 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714142206_collect-cpu-temp.pm.log 00:25:37.198 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714142206_collect-cpu-load.pm.log 00:25:37.198 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714142206_collect-bmc-pm.bmc.pm.log 00:25:38.134 16:36:47 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:25:38.134 16:36:47 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j72 00:25:38.134 16:36:47 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:25:38.134 16:36:47 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:25:38.134 16:36:47 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:25:38.134 16:36:47 -- spdk/autopackage.sh@19 -- $ timing_finish 00:25:38.134 16:36:47 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:25:38.134 16:36:47 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:25:38.134 16:36:47 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:25:38.134 16:36:47 -- spdk/autopackage.sh@20 -- $ exit 0 00:25:38.134 16:36:47 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:25:38.134 16:36:47 -- pm/common@30 -- $ 
signal_monitor_resources TERM 00:25:38.134 16:36:47 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:25:38.134 16:36:47 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:38.134 16:36:47 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:25:38.134 16:36:47 -- pm/common@45 -- $ pid=591252 00:25:38.134 16:36:47 -- pm/common@52 -- $ sudo kill -TERM 591252 00:25:38.134 16:36:47 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:38.134 16:36:47 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:25:38.134 16:36:47 -- pm/common@45 -- $ pid=591249 00:25:38.134 16:36:47 -- pm/common@52 -- $ sudo kill -TERM 591249 00:25:38.134 16:36:47 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:38.134 16:36:47 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:25:38.393 16:36:47 -- pm/common@45 -- $ pid=591254 00:25:38.393 16:36:47 -- pm/common@52 -- $ sudo kill -TERM 591254 00:25:38.393 16:36:47 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:38.393 16:36:47 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:25:38.393 16:36:47 -- pm/common@45 -- $ pid=591261 00:25:38.393 16:36:47 -- pm/common@52 -- $ sudo kill -TERM 591261 00:25:38.393 + [[ -n 224363 ]] 00:25:38.393 + sudo kill 224363 00:25:38.405 [Pipeline] } 00:25:38.426 [Pipeline] // stage 00:25:38.431 [Pipeline] } 00:25:38.451 [Pipeline] // timeout 00:25:38.456 [Pipeline] } 00:25:38.474 [Pipeline] // catchError 00:25:38.480 [Pipeline] } 00:25:38.499 [Pipeline] // wrap 00:25:38.505 [Pipeline] } 00:25:38.524 [Pipeline] // catchError 00:25:38.533 [Pipeline] stage 00:25:38.536 [Pipeline] { (Epilogue) 00:25:38.552 [Pipeline] catchError 00:25:38.554 [Pipeline] { 00:25:38.569 [Pipeline] echo 00:25:38.571 Cleanup processes 00:25:38.577 [Pipeline] sh 00:25:38.860 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:25:38.860 591355 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/sdr.cache 00:25:38.860 591590 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:25:38.875 [Pipeline] sh 00:25:39.159 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:25:39.159 ++ grep -v 'sudo pgrep' 00:25:39.159 ++ awk '{print $1}' 00:25:39.159 + sudo kill -9 591355 00:25:39.171 [Pipeline] sh 00:25:39.457 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:25:46.041 [Pipeline] sh 00:25:46.328 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:25:46.328 Artifacts sizes are good 00:25:46.344 [Pipeline] archiveArtifacts 00:25:46.355 Archiving artifacts 00:25:46.494 [Pipeline] sh 00:25:46.827 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-phy-autotest 00:25:46.843 [Pipeline] cleanWs 00:25:46.855 [WS-CLEANUP] Deleting project workspace... 00:25:46.855 [WS-CLEANUP] Deferred wipeout is used... 00:25:46.862 [WS-CLEANUP] done 00:25:46.863 [Pipeline] } 00:25:46.880 [Pipeline] // catchError 00:25:46.890 [Pipeline] sh 00:25:47.177 + logger -p user.info -t JENKINS-CI 00:25:47.226 [Pipeline] } 00:25:47.242 [Pipeline] // stage 00:25:47.247 [Pipeline] } 00:25:47.263 [Pipeline] // node 00:25:47.268 [Pipeline] End of Pipeline 00:25:47.307 Finished: SUCCESS