00:00:00.000 Started by upstream project "autotest-per-patch" build number 130932
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.018 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.020 The recommended git tool is: git
00:00:00.020 using credential 00000000-0000-0000-0000-000000000002
00:00:00.024 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/short-fuzz-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.043 Fetching changes from the remote Git repository
00:00:00.044 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.073 Using shallow fetch with depth 1
00:00:00.073 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.073 > git --version # timeout=10
00:00:00.110 > git --version # 'git version 2.39.2'
00:00:00.110 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.145 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.145 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.883 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.895 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.907 Checking out Revision bc56972291bf21b4d2a602b495a165146a8d67a1 (FETCH_HEAD)
00:00:04.907 > git config core.sparsecheckout # timeout=10
00:00:04.917 > git read-tree -mu HEAD # timeout=10
00:00:04.931 > git checkout -f bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=5
00:00:04.949 Commit message: "jenkins/jjb-config: Remove extendedChoice from ipxe-test-images"
00:00:04.949 > git rev-list --no-walk bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=10
00:00:05.056 [Pipeline] Start of Pipeline
00:00:05.070 [Pipeline] library
00:00:05.072 Loading library shm_lib@master
00:00:05.072 Library shm_lib@master is cached. Copying from home.
00:00:05.088 [Pipeline] node
00:00:05.115 Running on WFP39 in /var/jenkins/workspace/short-fuzz-phy-autotest
00:00:05.117 [Pipeline] {
00:00:05.127 [Pipeline] catchError
00:00:05.129 [Pipeline] {
00:00:05.143 [Pipeline] wrap
00:00:05.151 [Pipeline] {
00:00:05.158 [Pipeline] stage
00:00:05.159 [Pipeline] { (Prologue)
00:00:05.372 [Pipeline] sh
00:00:05.653 + logger -p user.info -t JENKINS-CI
00:00:05.672 [Pipeline] echo
00:00:05.674 Node: WFP39
00:00:05.683 [Pipeline] sh
00:00:05.981 [Pipeline] setCustomBuildProperty
00:00:05.989 [Pipeline] echo
00:00:05.990 Cleanup processes
00:00:05.992 [Pipeline] sh
00:00:06.270 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:06.270 3918505 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:06.282 [Pipeline] sh
00:00:06.563 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk
00:00:06.563 ++ grep -v 'sudo pgrep'
00:00:06.563 ++ awk '{print $1}'
00:00:06.563 + sudo kill -9
00:00:06.563 + true
00:00:06.597 [Pipeline] cleanWs
00:00:06.630 [WS-CLEANUP] Deleting project workspace...
00:00:06.630 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.672 [WS-CLEANUP] done 00:00:06.676 [Pipeline] setCustomBuildProperty 00:00:06.689 [Pipeline] sh 00:00:06.967 + sudo git config --global --replace-all safe.directory '*' 00:00:07.053 [Pipeline] httpRequest 00:00:07.968 [Pipeline] echo 00:00:07.970 Sorcerer 10.211.164.101 is alive 00:00:07.979 [Pipeline] retry 00:00:07.981 [Pipeline] { 00:00:07.995 [Pipeline] httpRequest 00:00:08.000 HttpMethod: GET 00:00:08.000 URL: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:08.001 Sending request to url: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:08.025 Response Code: HTTP/1.1 200 OK 00:00:08.025 Success: Status code 200 is in the accepted range: 200,404 00:00:08.026 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:35.470 [Pipeline] } 00:00:35.487 [Pipeline] // retry 00:00:35.494 [Pipeline] sh 00:00:35.776 + tar --no-same-owner -xf jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:35.792 [Pipeline] httpRequest 00:00:36.142 [Pipeline] echo 00:00:36.144 Sorcerer 10.211.164.101 is alive 00:00:36.153 [Pipeline] retry 00:00:36.155 [Pipeline] { 00:00:36.169 [Pipeline] httpRequest 00:00:36.174 HttpMethod: GET 00:00:36.174 URL: http://10.211.164.101/packages/spdk_3164389d2b5d131a44b53bbf5870c64d92bcea23.tar.gz 00:00:36.175 Sending request to url: http://10.211.164.101/packages/spdk_3164389d2b5d131a44b53bbf5870c64d92bcea23.tar.gz 00:00:36.180 Response Code: HTTP/1.1 200 OK 00:00:36.181 Success: Status code 200 is in the accepted range: 200,404 00:00:36.181 Saving response body to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk_3164389d2b5d131a44b53bbf5870c64d92bcea23.tar.gz 00:03:04.708 [Pipeline] } 00:03:04.725 [Pipeline] // retry 00:03:04.732 [Pipeline] sh 00:03:05.017 + tar --no-same-owner -xf spdk_3164389d2b5d131a44b53bbf5870c64d92bcea23.tar.gz 00:03:07.571 [Pipeline] sh 00:03:07.852 + git -C spdk log --oneline -n5 00:03:07.852 3164389d2 nvmf/tcp: remove await_req TAILQ 00:03:07.852 fda8e315d nvmf/tcp: add nvmf_tcp_qpair_process() helper function 00:03:07.852 0f32e40e7 nvmf/tcp: simplify nvmf_tcp_poll_group_poll event counting 00:03:07.852 35bc9df76 event: shrink size of event message pool 00:03:07.852 3950cd1bb bdev/nvme: Change spdk_bdev_reset() to succeed if at least one nvme_ctrlr is reconnected 00:03:07.862 [Pipeline] } 00:03:07.876 [Pipeline] // stage 00:03:07.883 [Pipeline] stage 00:03:07.885 [Pipeline] { (Prepare) 00:03:07.899 [Pipeline] writeFile 00:03:07.914 [Pipeline] sh 00:03:08.196 + logger -p user.info -t JENKINS-CI 00:03:08.208 [Pipeline] sh 00:03:08.490 + logger -p user.info -t JENKINS-CI 00:03:08.502 [Pipeline] sh 00:03:08.784 + cat autorun-spdk.conf 00:03:08.784 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:08.784 SPDK_TEST_FUZZER_SHORT=1 00:03:08.784 SPDK_TEST_FUZZER=1 00:03:08.784 SPDK_TEST_SETUP=1 00:03:08.784 SPDK_RUN_UBSAN=1 00:03:08.791 RUN_NIGHTLY=0 00:03:08.795 [Pipeline] readFile 00:03:08.817 [Pipeline] withEnv 00:03:08.819 [Pipeline] { 00:03:08.831 [Pipeline] sh 00:03:09.113 + set -ex 00:03:09.113 + [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf ]] 00:03:09.113 + source /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:03:09.113 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:09.113 ++ SPDK_TEST_FUZZER_SHORT=1 00:03:09.113 ++ SPDK_TEST_FUZZER=1 00:03:09.113 ++ SPDK_TEST_SETUP=1 00:03:09.113 ++ SPDK_RUN_UBSAN=1 00:03:09.113 ++ RUN_NIGHTLY=0 00:03:09.113 + case 
$SPDK_TEST_NVMF_NICS in 00:03:09.113 + DRIVERS= 00:03:09.113 + [[ -n '' ]] 00:03:09.113 + exit 0 00:03:09.122 [Pipeline] } 00:03:09.136 [Pipeline] // withEnv 00:03:09.141 [Pipeline] } 00:03:09.155 [Pipeline] // stage 00:03:09.164 [Pipeline] catchError 00:03:09.166 [Pipeline] { 00:03:09.180 [Pipeline] timeout 00:03:09.180 Timeout set to expire in 30 min 00:03:09.182 [Pipeline] { 00:03:09.197 [Pipeline] stage 00:03:09.199 [Pipeline] { (Tests) 00:03:09.213 [Pipeline] sh 00:03:09.495 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/short-fuzz-phy-autotest 00:03:09.495 ++ readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest 00:03:09.495 + DIR_ROOT=/var/jenkins/workspace/short-fuzz-phy-autotest 00:03:09.495 + [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest ]] 00:03:09.495 + DIR_SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:09.495 + DIR_OUTPUT=/var/jenkins/workspace/short-fuzz-phy-autotest/output 00:03:09.495 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk ]] 00:03:09.495 + [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:03:09.495 + mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/output 00:03:09.495 + [[ -d /var/jenkins/workspace/short-fuzz-phy-autotest/output ]] 00:03:09.495 + [[ short-fuzz-phy-autotest == pkgdep-* ]] 00:03:09.495 + cd /var/jenkins/workspace/short-fuzz-phy-autotest 00:03:09.495 + source /etc/os-release 00:03:09.495 ++ NAME='Fedora Linux' 00:03:09.495 ++ VERSION='39 (Cloud Edition)' 00:03:09.495 ++ ID=fedora 00:03:09.495 ++ VERSION_ID=39 00:03:09.495 ++ VERSION_CODENAME= 00:03:09.495 ++ PLATFORM_ID=platform:f39 00:03:09.495 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:03:09.495 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:09.495 ++ LOGO=fedora-logo-icon 00:03:09.495 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:03:09.495 ++ HOME_URL=https://fedoraproject.org/ 00:03:09.495 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:03:09.495 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:09.495 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:09.495 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:09.495 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:03:09.495 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:09.495 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:03:09.495 ++ SUPPORT_END=2024-11-12 00:03:09.495 ++ VARIANT='Cloud Edition' 00:03:09.495 ++ VARIANT_ID=cloud 00:03:09.495 + uname -a 00:03:09.495 Linux spdk-wfp-39 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 05:41:37 UTC 2024 x86_64 GNU/Linux 00:03:09.495 + sudo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:03:12.784 Hugepages 00:03:12.784 node hugesize free / total 00:03:12.784 node0 1048576kB 0 / 0 00:03:12.784 node0 2048kB 0 / 0 00:03:12.784 node1 1048576kB 0 / 0 00:03:12.784 node1 2048kB 0 / 0 00:03:12.784 00:03:12.784 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:12.784 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:12.784 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:12.784 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:12.784 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:12.784 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:12.784 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:12.784 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:12.784 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:12.784 NVMe 0000:1a:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:12.784 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:12.784 I/OAT 0000:80:04.1 8086 
2021 1 ioatdma - - 00:03:12.784 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:12.784 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:12.784 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:12.784 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:12.784 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:12.784 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:12.784 + rm -f /tmp/spdk-ld-path 00:03:12.784 + source autorun-spdk.conf 00:03:12.784 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:12.784 ++ SPDK_TEST_FUZZER_SHORT=1 00:03:12.784 ++ SPDK_TEST_FUZZER=1 00:03:12.784 ++ SPDK_TEST_SETUP=1 00:03:12.784 ++ SPDK_RUN_UBSAN=1 00:03:12.784 ++ RUN_NIGHTLY=0 00:03:12.784 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:12.784 + [[ -n '' ]] 00:03:12.784 + sudo git config --global --add safe.directory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:12.784 + for M in /var/spdk/build-*-manifest.txt 00:03:12.784 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:03:12.784 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:03:12.784 + for M in /var/spdk/build-*-manifest.txt 00:03:12.784 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:12.784 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:03:12.784 + for M in /var/spdk/build-*-manifest.txt 00:03:12.784 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:12.784 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/short-fuzz-phy-autotest/output/ 00:03:12.784 ++ uname 00:03:12.784 + [[ Linux == \L\i\n\u\x ]] 00:03:12.784 + sudo dmesg -T 00:03:12.784 + sudo dmesg --clear 00:03:12.784 + dmesg_pid=3919973 00:03:12.784 + sudo dmesg -Tw 00:03:12.784 + [[ Fedora Linux == FreeBSD ]] 00:03:12.784 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:12.784 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:12.784 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:12.784 + [[ -x /usr/src/fio-static/fio ]] 00:03:12.784 + export FIO_BIN=/usr/src/fio-static/fio 00:03:12.784 + FIO_BIN=/usr/src/fio-static/fio 00:03:12.784 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\s\h\o\r\t\-\f\u\z\z\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:12.784 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:03:12.784 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:12.784 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:12.784 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:12.784 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:12.784 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:12.784 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:12.784 + spdk/autorun.sh /var/jenkins/workspace/short-fuzz-phy-autotest/autorun-spdk.conf 00:03:12.784 Test configuration: 00:03:12.784 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:12.784 SPDK_TEST_FUZZER_SHORT=1 00:03:12.784 SPDK_TEST_FUZZER=1 00:03:12.784 SPDK_TEST_SETUP=1 00:03:12.784 SPDK_RUN_UBSAN=1 00:03:12.784 RUN_NIGHTLY=0 01:40:42 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:03:12.784 01:40:42 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:03:12.784 01:40:42 -- scripts/common.sh@15 -- $ shopt -s extglob 00:03:12.784 01:40:42 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:12.784 01:40:42 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:12.784 01:40:42 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:12.784 01:40:42 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:12.784 01:40:42 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:12.784 01:40:42 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:12.784 01:40:42 -- paths/export.sh@5 -- $ export PATH 00:03:12.784 01:40:42 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:12.784 01:40:42 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:03:12.785 01:40:42 -- common/autobuild_common.sh@486 -- $ date +%s 00:03:12.785 01:40:42 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728430842.XXXXXX 00:03:12.785 01:40:42 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728430842.KlnDzN 00:03:12.785 01:40:42 -- common/autobuild_common.sh@488 -- $ [[ -n '' 
]] 00:03:12.785 01:40:42 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:03:12.785 01:40:42 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:03:12.785 01:40:42 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:03:12.785 01:40:42 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:03:12.785 01:40:42 -- common/autobuild_common.sh@502 -- $ get_config_params 00:03:12.785 01:40:42 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:03:12.785 01:40:42 -- common/autotest_common.sh@10 -- $ set +x 00:03:12.785 01:40:42 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:03:12.785 01:40:42 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:03:12.785 01:40:42 -- pm/common@17 -- $ local monitor 00:03:12.785 01:40:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:12.785 01:40:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:12.785 01:40:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:12.785 01:40:42 -- pm/common@21 -- $ date +%s 00:03:12.785 01:40:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:12.785 01:40:42 -- pm/common@21 -- $ date +%s 00:03:12.785 01:40:42 -- pm/common@25 -- $ sleep 1 00:03:12.785 01:40:42 -- pm/common@21 -- $ date +%s 00:03:12.785 01:40:42 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728430842 00:03:12.785 01:40:42 -- pm/common@21 -- $ date +%s 00:03:12.785 01:40:42 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728430842 00:03:12.785 01:40:42 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728430842 00:03:12.785 01:40:42 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728430842 00:03:13.044 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728430842_collect-cpu-load.pm.log 00:03:13.044 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728430842_collect-vmstat.pm.log 00:03:13.044 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728430842_collect-cpu-temp.pm.log 00:03:13.044 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728430842_collect-bmc-pm.bmc.pm.log 00:03:13.983 01:40:43 -- 
common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:03:13.983 01:40:43 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:13.983 01:40:43 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:13.983 01:40:43 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:03:13.983 01:40:43 -- spdk/autobuild.sh@16 -- $ date -u 00:03:13.983 Tue Oct 8 11:40:43 PM UTC 2024 00:03:13.983 01:40:43 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:13.983 v25.01-pre-39-g3164389d2 00:03:13.983 01:40:43 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:03:13.983 01:40:43 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:13.983 01:40:43 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:13.983 01:40:43 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:13.983 01:40:43 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:13.983 01:40:43 -- common/autotest_common.sh@10 -- $ set +x 00:03:13.983 ************************************ 00:03:13.983 START TEST ubsan 00:03:13.983 ************************************ 00:03:13.983 01:40:43 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:03:13.983 using ubsan 00:03:13.983 00:03:13.983 real 0m0.001s 00:03:13.983 user 0m0.000s 00:03:13.983 sys 0m0.001s 00:03:13.983 01:40:43 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:13.983 01:40:43 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:13.983 ************************************ 00:03:13.983 END TEST ubsan 00:03:13.983 ************************************ 00:03:13.983 01:40:43 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:03:13.983 01:40:43 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:13.983 01:40:43 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:13.983 01:40:43 -- spdk/autobuild.sh@51 -- $ [[ 1 -eq 1 ]] 00:03:13.983 01:40:43 -- spdk/autobuild.sh@52 -- $ llvm_precompile 00:03:13.983 01:40:43 -- common/autobuild_common.sh@438 -- $ run_test autobuild_llvm_precompile _llvm_precompile 00:03:13.983 01:40:43 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:03:13.983 01:40:43 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:13.983 01:40:43 -- common/autotest_common.sh@10 -- $ set +x 00:03:13.983 ************************************ 00:03:13.983 START TEST autobuild_llvm_precompile 00:03:13.983 ************************************ 00:03:13.983 01:40:43 autobuild_llvm_precompile -- common/autotest_common.sh@1125 -- $ _llvm_precompile 00:03:13.983 01:40:43 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ clang --version 00:03:13.983 01:40:43 autobuild_llvm_precompile -- common/autobuild_common.sh@32 -- $ [[ clang version 17.0.6 (Fedora 17.0.6-2.fc39) 00:03:13.983 Target: x86_64-redhat-linux-gnu 00:03:13.983 Thread model: posix 00:03:13.983 InstalledDir: /usr/bin =~ version (([0-9]+).([0-9]+).([0-9]+)) ]] 00:03:13.983 01:40:43 autobuild_llvm_precompile -- common/autobuild_common.sh@33 -- $ clang_num=17 00:03:13.983 01:40:43 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ export CC=clang-17 00:03:13.983 01:40:43 autobuild_llvm_precompile -- common/autobuild_common.sh@35 -- $ CC=clang-17 00:03:13.983 01:40:43 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ export CXX=clang++-17 00:03:13.983 01:40:43 autobuild_llvm_precompile -- common/autobuild_common.sh@36 -- $ CXX=clang++-17 00:03:13.983 01:40:43 autobuild_llvm_precompile -- common/autobuild_common.sh@38 -- $ 
fuzzer_libs=(/usr/lib*/clang/@("$clang_num"|"$clang_version")/lib/*linux*/libclang_rt.fuzzer_no_main?(-x86_64).a) 00:03:13.983 01:40:43 autobuild_llvm_precompile -- common/autobuild_common.sh@39 -- $ fuzzer_lib=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:03:13.983 01:40:43 autobuild_llvm_precompile -- common/autobuild_common.sh@40 -- $ [[ -e /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a ]] 00:03:13.983 01:40:43 autobuild_llvm_precompile -- common/autobuild_common.sh@42 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a' 00:03:13.983 01:40:43 autobuild_llvm_precompile -- common/autobuild_common.sh@44 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:03:14.243 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:03:14.243 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:03:14.811 Using 'verbs' RDMA provider 00:03:30.699 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:42.913 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:42.913 Creating mk/config.mk...done. 00:03:42.913 Creating mk/cc.flags.mk...done. 00:03:42.913 Type 'make' to build. 00:03:42.913 00:03:42.913 real 0m28.087s 00:03:42.913 user 0m12.568s 00:03:42.913 sys 0m14.767s 00:03:42.913 01:41:11 autobuild_llvm_precompile -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:42.913 01:41:11 autobuild_llvm_precompile -- common/autotest_common.sh@10 -- $ set +x 00:03:42.913 ************************************ 00:03:42.913 END TEST autobuild_llvm_precompile 00:03:42.913 ************************************ 00:03:42.913 01:41:11 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:42.913 01:41:11 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:42.913 01:41:11 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:42.913 01:41:11 -- spdk/autobuild.sh@62 -- $ [[ 1 -eq 1 ]] 00:03:42.913 01:41:11 -- spdk/autobuild.sh@64 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-fuzzer=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:03:42.913 Using default SPDK env in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:03:42.913 Using default DPDK in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:03:42.913 Using 'verbs' RDMA provider 00:03:56.052 Configuring ISA-L (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal.log)...done. 00:04:06.037 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:04:06.864 Creating mk/config.mk...done. 00:04:06.864 Creating mk/cc.flags.mk...done. 
00:04:06.864 Type 'make' to build. 00:04:06.864 01:41:36 -- spdk/autobuild.sh@70 -- $ run_test make make -j72 00:04:06.864 01:41:36 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:04:06.864 01:41:36 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:04:06.864 01:41:36 -- common/autotest_common.sh@10 -- $ set +x 00:04:06.864 ************************************ 00:04:06.864 START TEST make 00:04:06.864 ************************************ 00:04:06.864 01:41:36 make -- common/autotest_common.sh@1125 -- $ make -j72 00:04:07.123 make[1]: Nothing to be done for 'all'. 00:04:09.033 The Meson build system 00:04:09.033 Version: 1.5.0 00:04:09.033 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user 00:04:09.033 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:09.033 Build type: native build 00:04:09.033 Project name: libvfio-user 00:04:09.033 Project version: 0.0.1 00:04:09.033 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)") 00:04:09.033 C linker for the host machine: clang-17 ld.bfd 2.40-14 00:04:09.033 Host machine cpu family: x86_64 00:04:09.033 Host machine cpu: x86_64 00:04:09.033 Run-time dependency threads found: YES 00:04:09.033 Library dl found: YES 00:04:09.033 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:09.033 Run-time dependency json-c found: YES 0.17 00:04:09.033 Run-time dependency cmocka found: YES 1.1.7 00:04:09.033 Program pytest-3 found: NO 00:04:09.033 Program flake8 found: NO 00:04:09.033 Program misspell-fixer found: NO 00:04:09.033 Program restructuredtext-lint found: NO 00:04:09.033 Program valgrind found: YES (/usr/bin/valgrind) 00:04:09.033 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:09.033 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:09.033 Compiler for C supports arguments -Wwrite-strings: YES 00:04:09.033 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:04:09.033 Program test-lspci.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:04:09.033 Program test-linkage.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:04:09.033 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:04:09.033 Build targets in project: 8 00:04:09.033 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:04:09.033 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:04:09.033 00:04:09.033 libvfio-user 0.0.1 00:04:09.034 00:04:09.034 User defined options 00:04:09.034 buildtype : debug 00:04:09.034 default_library: static 00:04:09.034 libdir : /usr/local/lib 00:04:09.034 00:04:09.034 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:09.291 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:04:09.291 [1/36] Compiling C object samples/lspci.p/lspci.c.o 00:04:09.291 [2/36] Compiling C object lib/libvfio-user.a.p/irq.c.o 00:04:09.291 [3/36] Compiling C object lib/libvfio-user.a.p/migration.c.o 00:04:09.291 [4/36] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:04:09.291 [5/36] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:04:09.291 [6/36] Compiling C object lib/libvfio-user.a.p/tran.c.o 00:04:09.291 [7/36] Compiling C object samples/client.p/.._lib_tran.c.o 00:04:09.291 [8/36] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:04:09.291 [9/36] Compiling C object lib/libvfio-user.a.p/pci.c.o 00:04:09.291 [10/36] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:04:09.291 [11/36] Compiling C object samples/client.p/.._lib_migration.c.o 00:04:09.291 [12/36] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:04:09.291 [13/36] Compiling C object test/unit_tests.p/mocks.c.o 00:04:09.291 [14/36] Compiling C object lib/libvfio-user.a.p/dma.c.o 00:04:09.291 [15/36] Compiling C object lib/libvfio-user.a.p/pci_caps.c.o 00:04:09.291 [16/36] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:04:09.291 [17/36] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:04:09.291 [18/36] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:04:09.291 [19/36] Compiling C object lib/libvfio-user.a.p/tran_sock.c.o 00:04:09.291 [20/36] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:04:09.291 [21/36] Compiling C object test/unit_tests.p/unit-tests.c.o 00:04:09.291 [22/36] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:04:09.291 [23/36] Compiling C object samples/server.p/server.c.o 00:04:09.291 [24/36] Compiling C object samples/null.p/null.c.o 00:04:09.291 [25/36] Compiling C object samples/client.p/client.c.o 00:04:09.291 [26/36] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:04:09.291 [27/36] Compiling C object lib/libvfio-user.a.p/libvfio-user.c.o 00:04:09.291 [28/36] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:04:09.291 [29/36] Linking static target lib/libvfio-user.a 00:04:09.549 [30/36] Linking target samples/client 00:04:09.549 [31/36] Linking target samples/gpio-pci-idio-16 00:04:09.549 [32/36] Linking target test/unit_tests 00:04:09.549 [33/36] Linking target samples/server 00:04:09.549 [34/36] Linking target samples/shadow_ioeventfd_server 00:04:09.549 [35/36] Linking target samples/null 00:04:09.549 [36/36] Linking target samples/lspci 00:04:09.549 INFO: autodetecting backend as ninja 00:04:09.549 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:09.549 DESTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user meson install --quiet -C 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:09.806 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/build-debug' 00:04:09.806 ninja: no work to do. 00:04:16.366 The Meson build system 00:04:16.366 Version: 1.5.0 00:04:16.366 Source dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk 00:04:16.366 Build dir: /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp 00:04:16.366 Build type: native build 00:04:16.366 Program cat found: YES (/usr/bin/cat) 00:04:16.366 Project name: DPDK 00:04:16.366 Project version: 24.03.0 00:04:16.366 C compiler for the host machine: clang-17 (clang 17.0.6 "clang version 17.0.6 (Fedora 17.0.6-2.fc39)") 00:04:16.366 C linker for the host machine: clang-17 ld.bfd 2.40-14 00:04:16.366 Host machine cpu family: x86_64 00:04:16.366 Host machine cpu: x86_64 00:04:16.366 Message: ## Building in Developer Mode ## 00:04:16.366 Program pkg-config found: YES (/usr/bin/pkg-config) 00:04:16.366 Program check-symbols.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:04:16.366 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:04:16.366 Program python3 found: YES (/usr/bin/python3) 00:04:16.366 Program cat found: YES (/usr/bin/cat) 00:04:16.366 Compiler for C supports arguments -march=native: YES 00:04:16.366 Checking for size of "void *" : 8 00:04:16.366 Checking for size of "void *" : 8 (cached) 00:04:16.366 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:04:16.366 Library m found: YES 00:04:16.366 Library numa found: YES 00:04:16.366 Has header "numaif.h" : YES 00:04:16.366 Library fdt found: NO 00:04:16.366 Library execinfo found: NO 00:04:16.366 Has header "execinfo.h" : YES 00:04:16.366 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:16.366 Run-time dependency libarchive found: NO (tried pkgconfig) 00:04:16.366 Run-time dependency libbsd found: NO (tried pkgconfig) 00:04:16.366 Run-time dependency jansson found: NO (tried pkgconfig) 00:04:16.366 Run-time dependency openssl found: YES 3.1.1 00:04:16.366 Run-time dependency libpcap found: YES 1.10.4 00:04:16.366 Has header "pcap.h" with dependency libpcap: YES 00:04:16.366 Compiler for C supports arguments -Wcast-qual: YES 00:04:16.366 Compiler for C supports arguments -Wdeprecated: YES 00:04:16.366 Compiler for C supports arguments -Wformat: YES 00:04:16.366 Compiler for C supports arguments -Wformat-nonliteral: YES 00:04:16.366 Compiler for C supports arguments -Wformat-security: YES 00:04:16.366 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:16.366 Compiler for C supports arguments -Wmissing-prototypes: YES 00:04:16.366 Compiler for C supports arguments -Wnested-externs: YES 00:04:16.366 Compiler for C supports arguments -Wold-style-definition: YES 00:04:16.366 Compiler for C supports arguments -Wpointer-arith: YES 00:04:16.366 Compiler for C supports arguments -Wsign-compare: YES 00:04:16.366 Compiler for C supports arguments -Wstrict-prototypes: YES 00:04:16.366 Compiler for C supports arguments -Wundef: YES 00:04:16.366 Compiler for C supports arguments -Wwrite-strings: YES 00:04:16.366 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:04:16.366 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:04:16.366 Compiler for C supports arguments -Wno-missing-field-initializers: 
YES 00:04:16.366 Program objdump found: YES (/usr/bin/objdump) 00:04:16.366 Compiler for C supports arguments -mavx512f: YES 00:04:16.366 Checking if "AVX512 checking" compiles: YES 00:04:16.366 Fetching value of define "__SSE4_2__" : 1 00:04:16.366 Fetching value of define "__AES__" : 1 00:04:16.366 Fetching value of define "__AVX__" : 1 00:04:16.366 Fetching value of define "__AVX2__" : 1 00:04:16.366 Fetching value of define "__AVX512BW__" : 1 00:04:16.366 Fetching value of define "__AVX512CD__" : 1 00:04:16.366 Fetching value of define "__AVX512DQ__" : 1 00:04:16.366 Fetching value of define "__AVX512F__" : 1 00:04:16.366 Fetching value of define "__AVX512VL__" : 1 00:04:16.366 Fetching value of define "__PCLMUL__" : 1 00:04:16.366 Fetching value of define "__RDRND__" : 1 00:04:16.366 Fetching value of define "__RDSEED__" : 1 00:04:16.367 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:04:16.367 Fetching value of define "__znver1__" : (undefined) 00:04:16.367 Fetching value of define "__znver2__" : (undefined) 00:04:16.367 Fetching value of define "__znver3__" : (undefined) 00:04:16.367 Fetching value of define "__znver4__" : (undefined) 00:04:16.367 Compiler for C supports arguments -Wno-format-truncation: NO 00:04:16.367 Message: lib/log: Defining dependency "log" 00:04:16.367 Message: lib/kvargs: Defining dependency "kvargs" 00:04:16.367 Message: lib/telemetry: Defining dependency "telemetry" 00:04:16.367 Checking for function "getentropy" : NO 00:04:16.367 Message: lib/eal: Defining dependency "eal" 00:04:16.367 Message: lib/ring: Defining dependency "ring" 00:04:16.367 Message: lib/rcu: Defining dependency "rcu" 00:04:16.367 Message: lib/mempool: Defining dependency "mempool" 00:04:16.367 Message: lib/mbuf: Defining dependency "mbuf" 00:04:16.367 Fetching value of define "__PCLMUL__" : 1 (cached) 00:04:16.367 Fetching value of define "__AVX512F__" : 1 (cached) 00:04:16.367 Fetching value of define "__AVX512BW__" : 1 (cached) 00:04:16.367 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:04:16.367 Fetching value of define "__AVX512VL__" : 1 (cached) 00:04:16.367 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:04:16.367 Compiler for C supports arguments -mpclmul: YES 00:04:16.367 Compiler for C supports arguments -maes: YES 00:04:16.367 Compiler for C supports arguments -mavx512f: YES (cached) 00:04:16.367 Compiler for C supports arguments -mavx512bw: YES 00:04:16.367 Compiler for C supports arguments -mavx512dq: YES 00:04:16.367 Compiler for C supports arguments -mavx512vl: YES 00:04:16.367 Compiler for C supports arguments -mvpclmulqdq: YES 00:04:16.367 Compiler for C supports arguments -mavx2: YES 00:04:16.367 Compiler for C supports arguments -mavx: YES 00:04:16.367 Message: lib/net: Defining dependency "net" 00:04:16.367 Message: lib/meter: Defining dependency "meter" 00:04:16.367 Message: lib/ethdev: Defining dependency "ethdev" 00:04:16.367 Message: lib/pci: Defining dependency "pci" 00:04:16.367 Message: lib/cmdline: Defining dependency "cmdline" 00:04:16.367 Message: lib/hash: Defining dependency "hash" 00:04:16.367 Message: lib/timer: Defining dependency "timer" 00:04:16.367 Message: lib/compressdev: Defining dependency "compressdev" 00:04:16.367 Message: lib/cryptodev: Defining dependency "cryptodev" 00:04:16.367 Message: lib/dmadev: Defining dependency "dmadev" 00:04:16.367 Compiler for C supports arguments -Wno-cast-qual: YES 00:04:16.367 Message: lib/power: Defining dependency "power" 00:04:16.367 Message: lib/reorder: Defining 
dependency "reorder" 00:04:16.367 Message: lib/security: Defining dependency "security" 00:04:16.367 Has header "linux/userfaultfd.h" : YES 00:04:16.367 Has header "linux/vduse.h" : YES 00:04:16.367 Message: lib/vhost: Defining dependency "vhost" 00:04:16.367 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:04:16.367 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:04:16.367 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:04:16.367 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:04:16.367 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:04:16.367 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:04:16.367 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:04:16.367 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:04:16.367 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:04:16.367 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:04:16.367 Program doxygen found: YES (/usr/local/bin/doxygen) 00:04:16.367 Configuring doxy-api-html.conf using configuration 00:04:16.367 Configuring doxy-api-man.conf using configuration 00:04:16.367 Program mandb found: YES (/usr/bin/mandb) 00:04:16.367 Program sphinx-build found: NO 00:04:16.367 Configuring rte_build_config.h using configuration 00:04:16.367 Message: 00:04:16.367 ================= 00:04:16.367 Applications Enabled 00:04:16.367 ================= 00:04:16.367 00:04:16.367 apps: 00:04:16.367 00:04:16.367 00:04:16.367 Message: 00:04:16.367 ================= 00:04:16.367 Libraries Enabled 00:04:16.367 ================= 00:04:16.367 00:04:16.367 libs: 00:04:16.367 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:04:16.367 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:04:16.367 cryptodev, dmadev, power, reorder, security, vhost, 00:04:16.367 00:04:16.367 Message: 00:04:16.367 =============== 00:04:16.367 Drivers Enabled 00:04:16.367 =============== 00:04:16.367 00:04:16.367 common: 00:04:16.367 00:04:16.367 bus: 00:04:16.367 pci, vdev, 00:04:16.367 mempool: 00:04:16.367 ring, 00:04:16.367 dma: 00:04:16.367 00:04:16.367 net: 00:04:16.367 00:04:16.367 crypto: 00:04:16.367 00:04:16.367 compress: 00:04:16.367 00:04:16.367 vdpa: 00:04:16.367 00:04:16.367 00:04:16.367 Message: 00:04:16.367 ================= 00:04:16.367 Content Skipped 00:04:16.367 ================= 00:04:16.367 00:04:16.367 apps: 00:04:16.367 dumpcap: explicitly disabled via build config 00:04:16.367 graph: explicitly disabled via build config 00:04:16.367 pdump: explicitly disabled via build config 00:04:16.367 proc-info: explicitly disabled via build config 00:04:16.367 test-acl: explicitly disabled via build config 00:04:16.367 test-bbdev: explicitly disabled via build config 00:04:16.367 test-cmdline: explicitly disabled via build config 00:04:16.367 test-compress-perf: explicitly disabled via build config 00:04:16.367 test-crypto-perf: explicitly disabled via build config 00:04:16.367 test-dma-perf: explicitly disabled via build config 00:04:16.367 test-eventdev: explicitly disabled via build config 00:04:16.367 test-fib: explicitly disabled via build config 00:04:16.367 test-flow-perf: explicitly disabled via build config 00:04:16.367 test-gpudev: explicitly disabled via build config 00:04:16.367 test-mldev: explicitly disabled via build config 00:04:16.367 test-pipeline: explicitly disabled via build config 00:04:16.367 test-pmd: 
explicitly disabled via build config 00:04:16.367 test-regex: explicitly disabled via build config 00:04:16.367 test-sad: explicitly disabled via build config 00:04:16.367 test-security-perf: explicitly disabled via build config 00:04:16.367 00:04:16.367 libs: 00:04:16.367 argparse: explicitly disabled via build config 00:04:16.367 metrics: explicitly disabled via build config 00:04:16.368 acl: explicitly disabled via build config 00:04:16.368 bbdev: explicitly disabled via build config 00:04:16.368 bitratestats: explicitly disabled via build config 00:04:16.368 bpf: explicitly disabled via build config 00:04:16.368 cfgfile: explicitly disabled via build config 00:04:16.368 distributor: explicitly disabled via build config 00:04:16.368 efd: explicitly disabled via build config 00:04:16.368 eventdev: explicitly disabled via build config 00:04:16.368 dispatcher: explicitly disabled via build config 00:04:16.368 gpudev: explicitly disabled via build config 00:04:16.368 gro: explicitly disabled via build config 00:04:16.368 gso: explicitly disabled via build config 00:04:16.368 ip_frag: explicitly disabled via build config 00:04:16.368 jobstats: explicitly disabled via build config 00:04:16.368 latencystats: explicitly disabled via build config 00:04:16.368 lpm: explicitly disabled via build config 00:04:16.368 member: explicitly disabled via build config 00:04:16.368 pcapng: explicitly disabled via build config 00:04:16.368 rawdev: explicitly disabled via build config 00:04:16.368 regexdev: explicitly disabled via build config 00:04:16.368 mldev: explicitly disabled via build config 00:04:16.368 rib: explicitly disabled via build config 00:04:16.368 sched: explicitly disabled via build config 00:04:16.368 stack: explicitly disabled via build config 00:04:16.368 ipsec: explicitly disabled via build config 00:04:16.368 pdcp: explicitly disabled via build config 00:04:16.368 fib: explicitly disabled via build config 00:04:16.368 port: explicitly disabled via build config 00:04:16.368 pdump: explicitly disabled via build config 00:04:16.368 table: explicitly disabled via build config 00:04:16.368 pipeline: explicitly disabled via build config 00:04:16.368 graph: explicitly disabled via build config 00:04:16.368 node: explicitly disabled via build config 00:04:16.368 00:04:16.368 drivers: 00:04:16.368 common/cpt: not in enabled drivers build config 00:04:16.368 common/dpaax: not in enabled drivers build config 00:04:16.368 common/iavf: not in enabled drivers build config 00:04:16.368 common/idpf: not in enabled drivers build config 00:04:16.368 common/ionic: not in enabled drivers build config 00:04:16.368 common/mvep: not in enabled drivers build config 00:04:16.368 common/octeontx: not in enabled drivers build config 00:04:16.368 bus/auxiliary: not in enabled drivers build config 00:04:16.368 bus/cdx: not in enabled drivers build config 00:04:16.368 bus/dpaa: not in enabled drivers build config 00:04:16.368 bus/fslmc: not in enabled drivers build config 00:04:16.368 bus/ifpga: not in enabled drivers build config 00:04:16.368 bus/platform: not in enabled drivers build config 00:04:16.368 bus/uacce: not in enabled drivers build config 00:04:16.368 bus/vmbus: not in enabled drivers build config 00:04:16.368 common/cnxk: not in enabled drivers build config 00:04:16.368 common/mlx5: not in enabled drivers build config 00:04:16.368 common/nfp: not in enabled drivers build config 00:04:16.368 common/nitrox: not in enabled drivers build config 00:04:16.368 common/qat: not in enabled drivers build config 
00:04:16.368 common/sfc_efx: not in enabled drivers build config 00:04:16.368 mempool/bucket: not in enabled drivers build config 00:04:16.368 mempool/cnxk: not in enabled drivers build config 00:04:16.368 mempool/dpaa: not in enabled drivers build config 00:04:16.368 mempool/dpaa2: not in enabled drivers build config 00:04:16.368 mempool/octeontx: not in enabled drivers build config 00:04:16.368 mempool/stack: not in enabled drivers build config 00:04:16.368 dma/cnxk: not in enabled drivers build config 00:04:16.368 dma/dpaa: not in enabled drivers build config 00:04:16.368 dma/dpaa2: not in enabled drivers build config 00:04:16.368 dma/hisilicon: not in enabled drivers build config 00:04:16.368 dma/idxd: not in enabled drivers build config 00:04:16.368 dma/ioat: not in enabled drivers build config 00:04:16.368 dma/skeleton: not in enabled drivers build config 00:04:16.368 net/af_packet: not in enabled drivers build config 00:04:16.368 net/af_xdp: not in enabled drivers build config 00:04:16.368 net/ark: not in enabled drivers build config 00:04:16.368 net/atlantic: not in enabled drivers build config 00:04:16.368 net/avp: not in enabled drivers build config 00:04:16.368 net/axgbe: not in enabled drivers build config 00:04:16.368 net/bnx2x: not in enabled drivers build config 00:04:16.368 net/bnxt: not in enabled drivers build config 00:04:16.368 net/bonding: not in enabled drivers build config 00:04:16.368 net/cnxk: not in enabled drivers build config 00:04:16.368 net/cpfl: not in enabled drivers build config 00:04:16.368 net/cxgbe: not in enabled drivers build config 00:04:16.368 net/dpaa: not in enabled drivers build config 00:04:16.368 net/dpaa2: not in enabled drivers build config 00:04:16.368 net/e1000: not in enabled drivers build config 00:04:16.368 net/ena: not in enabled drivers build config 00:04:16.368 net/enetc: not in enabled drivers build config 00:04:16.368 net/enetfec: not in enabled drivers build config 00:04:16.368 net/enic: not in enabled drivers build config 00:04:16.368 net/failsafe: not in enabled drivers build config 00:04:16.368 net/fm10k: not in enabled drivers build config 00:04:16.368 net/gve: not in enabled drivers build config 00:04:16.368 net/hinic: not in enabled drivers build config 00:04:16.368 net/hns3: not in enabled drivers build config 00:04:16.368 net/i40e: not in enabled drivers build config 00:04:16.368 net/iavf: not in enabled drivers build config 00:04:16.368 net/ice: not in enabled drivers build config 00:04:16.368 net/idpf: not in enabled drivers build config 00:04:16.368 net/igc: not in enabled drivers build config 00:04:16.368 net/ionic: not in enabled drivers build config 00:04:16.368 net/ipn3ke: not in enabled drivers build config 00:04:16.368 net/ixgbe: not in enabled drivers build config 00:04:16.368 net/mana: not in enabled drivers build config 00:04:16.368 net/memif: not in enabled drivers build config 00:04:16.368 net/mlx4: not in enabled drivers build config 00:04:16.368 net/mlx5: not in enabled drivers build config 00:04:16.368 net/mvneta: not in enabled drivers build config 00:04:16.368 net/mvpp2: not in enabled drivers build config 00:04:16.368 net/netvsc: not in enabled drivers build config 00:04:16.368 net/nfb: not in enabled drivers build config 00:04:16.368 net/nfp: not in enabled drivers build config 00:04:16.368 net/ngbe: not in enabled drivers build config 00:04:16.368 net/null: not in enabled drivers build config 00:04:16.368 net/octeontx: not in enabled drivers build config 00:04:16.368 net/octeon_ep: not in enabled 
drivers build config 00:04:16.369 net/pcap: not in enabled drivers build config 00:04:16.369 net/pfe: not in enabled drivers build config 00:04:16.369 net/qede: not in enabled drivers build config 00:04:16.369 net/ring: not in enabled drivers build config 00:04:16.369 net/sfc: not in enabled drivers build config 00:04:16.369 net/softnic: not in enabled drivers build config 00:04:16.369 net/tap: not in enabled drivers build config 00:04:16.369 net/thunderx: not in enabled drivers build config 00:04:16.369 net/txgbe: not in enabled drivers build config 00:04:16.369 net/vdev_netvsc: not in enabled drivers build config 00:04:16.369 net/vhost: not in enabled drivers build config 00:04:16.369 net/virtio: not in enabled drivers build config 00:04:16.369 net/vmxnet3: not in enabled drivers build config 00:04:16.369 raw/*: missing internal dependency, "rawdev" 00:04:16.369 crypto/armv8: not in enabled drivers build config 00:04:16.369 crypto/bcmfs: not in enabled drivers build config 00:04:16.369 crypto/caam_jr: not in enabled drivers build config 00:04:16.369 crypto/ccp: not in enabled drivers build config 00:04:16.369 crypto/cnxk: not in enabled drivers build config 00:04:16.369 crypto/dpaa_sec: not in enabled drivers build config 00:04:16.369 crypto/dpaa2_sec: not in enabled drivers build config 00:04:16.369 crypto/ipsec_mb: not in enabled drivers build config 00:04:16.369 crypto/mlx5: not in enabled drivers build config 00:04:16.369 crypto/mvsam: not in enabled drivers build config 00:04:16.369 crypto/nitrox: not in enabled drivers build config 00:04:16.369 crypto/null: not in enabled drivers build config 00:04:16.369 crypto/octeontx: not in enabled drivers build config 00:04:16.369 crypto/openssl: not in enabled drivers build config 00:04:16.369 crypto/scheduler: not in enabled drivers build config 00:04:16.369 crypto/uadk: not in enabled drivers build config 00:04:16.369 crypto/virtio: not in enabled drivers build config 00:04:16.369 compress/isal: not in enabled drivers build config 00:04:16.369 compress/mlx5: not in enabled drivers build config 00:04:16.369 compress/nitrox: not in enabled drivers build config 00:04:16.369 compress/octeontx: not in enabled drivers build config 00:04:16.369 compress/zlib: not in enabled drivers build config 00:04:16.369 regex/*: missing internal dependency, "regexdev" 00:04:16.369 ml/*: missing internal dependency, "mldev" 00:04:16.369 vdpa/ifc: not in enabled drivers build config 00:04:16.369 vdpa/mlx5: not in enabled drivers build config 00:04:16.369 vdpa/nfp: not in enabled drivers build config 00:04:16.369 vdpa/sfc: not in enabled drivers build config 00:04:16.369 event/*: missing internal dependency, "eventdev" 00:04:16.369 baseband/*: missing internal dependency, "bbdev" 00:04:16.369 gpu/*: missing internal dependency, "gpudev" 00:04:16.369 00:04:16.369 00:04:16.369 Build targets in project: 85 00:04:16.369 00:04:16.369 DPDK 24.03.0 00:04:16.369 00:04:16.369 User defined options 00:04:16.369 buildtype : debug 00:04:16.369 default_library : static 00:04:16.369 libdir : lib 00:04:16.369 prefix : /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:04:16.369 c_args : -fPIC -Werror 00:04:16.369 c_link_args : 00:04:16.369 cpu_instruction_set: native 00:04:16.369 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:04:16.369 disable_libs : 
bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:04:16.369 enable_docs : false 00:04:16.369 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:04:16.369 enable_kmods : false 00:04:16.369 max_lcores : 128 00:04:16.369 tests : false 00:04:16.369 00:04:16.369 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:16.369 ninja: Entering directory `/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp' 00:04:16.369 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:16.369 [2/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:04:16.369 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:16.369 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:16.369 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:16.369 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:16.369 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:16.369 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:16.369 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:16.369 [10/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:16.369 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:16.369 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:16.369 [13/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:16.369 [14/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:16.369 [15/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:16.369 [16/268] Linking static target lib/librte_kvargs.a 00:04:16.370 [17/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:16.370 [18/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:16.370 [19/268] Linking static target lib/librte_log.a 00:04:16.370 [20/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:16.633 [21/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:16.633 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:16.633 [23/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:16.633 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:16.633 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:16.633 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:16.633 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:16.633 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:16.633 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:16.633 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:16.633 [31/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:16.633 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:16.633 [33/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:16.633 [34/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:16.633 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:16.633 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:16.633 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:16.633 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:16.633 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:16.633 [40/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:16.633 [41/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:16.633 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:16.633 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:16.633 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:16.633 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:16.633 [46/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:16.633 [47/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:16.633 [48/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:16.633 [49/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:16.633 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:16.633 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:16.633 [52/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:16.633 [53/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:16.633 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:16.633 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:16.633 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:16.633 [57/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:16.633 [58/268] Linking static target lib/librte_telemetry.a 00:04:16.633 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:16.633 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:16.633 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:16.633 [62/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:16.633 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:16.633 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:16.633 [65/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:16.633 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:16.633 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:16.633 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:16.633 [69/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:16.633 [70/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:16.633 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:16.633 [72/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:16.633 [73/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:16.633 [74/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:16.633 [75/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:16.633 [76/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:16.633 [77/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:16.633 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:16.633 [79/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:16.633 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:16.633 [81/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:16.633 [82/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:16.633 [83/268] Linking static target lib/librte_pci.a 00:04:16.633 [84/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:04:16.633 [85/268] Linking static target lib/librte_ring.a 00:04:16.633 [86/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:16.633 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:16.633 [88/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:16.633 [89/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:16.633 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:16.633 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:16.633 [92/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:16.633 [93/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:16.633 [94/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:16.633 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:16.633 [96/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:16.633 [97/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:16.633 [98/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:16.633 [99/268] Linking static target lib/librte_rcu.a 00:04:16.633 [100/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:16.633 [101/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:16.633 [102/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:16.633 [103/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:16.894 [104/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:16.894 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:16.894 [106/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:16.894 [107/268] Linking static target lib/librte_eal.a 00:04:16.894 [108/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:16.894 [109/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:16.894 [110/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:16.894 [111/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:16.894 [112/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:16.894 [113/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:16.894 [114/268] Linking static target lib/librte_mempool.a 00:04:16.894 [115/268] Linking static target lib/librte_mbuf.a 00:04:16.894 [116/268] Compiling C 
object lib/librte_net.a.p/net_rte_arp.c.o 00:04:16.894 [117/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:16.894 [118/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:17.154 [119/268] Linking static target lib/librte_net.a 00:04:17.154 [120/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:17.154 [121/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:17.154 [122/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:17.154 [123/268] Linking static target lib/librte_meter.a 00:04:17.154 [124/268] Linking target lib/librte_log.so.24.1 00:04:17.155 [125/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:17.155 [126/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:17.155 [127/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:17.155 [128/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:17.155 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:17.155 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:17.155 [131/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:17.155 [132/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:17.155 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:17.155 [134/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:17.155 [135/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:17.155 [136/268] Linking static target lib/librte_timer.a 00:04:17.155 [137/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:17.155 [138/268] Linking static target lib/librte_cmdline.a 00:04:17.155 [139/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:17.155 [140/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:17.155 [141/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:17.155 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:17.155 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:04:17.155 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:17.155 [145/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:17.155 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:17.155 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:17.155 [148/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:17.451 [149/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:17.451 [150/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:17.451 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:17.451 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:17.451 [153/268] Linking static target lib/librte_compressdev.a 00:04:17.451 [154/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:17.451 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:17.451 [156/268] 
Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:17.451 [157/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:04:17.451 [158/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:17.451 [159/268] Linking static target lib/librte_dmadev.a 00:04:17.451 [160/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:17.451 [161/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:17.451 [162/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:17.451 [163/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:17.451 [164/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:17.451 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:17.451 [166/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:17.451 [167/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:17.451 [168/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:17.451 [169/268] Linking target lib/librte_kvargs.so.24.1 00:04:17.451 [170/268] Linking target lib/librte_telemetry.so.24.1 00:04:17.451 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:17.451 [172/268] Linking static target lib/librte_power.a 00:04:17.451 [173/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:17.451 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:17.451 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:17.451 [176/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:17.451 [177/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:17.451 [178/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:17.451 [179/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:17.451 [180/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:17.451 [181/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:17.451 [182/268] Linking static target lib/librte_hash.a 00:04:17.451 [183/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:17.451 [184/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:17.451 [185/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:17.451 [186/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:17.451 [187/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:17.451 [188/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:17.451 [189/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:17.451 [190/268] Linking static target lib/librte_reorder.a 00:04:17.451 [191/268] Linking static target lib/librte_security.a 00:04:17.451 [192/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:04:17.451 [193/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:04:17.451 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:17.451 [195/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:17.734 [196/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 
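The options summary above (buildtype debug, static default_library, c_args '-fPIC -Werror', max_lcores 128, docs/kmods/tests off, and only the bus, bus/pci, bus/vdev and mempool/ring drivers enabled) describes the DPDK 24.03 subproject that SPDK builds under spdk/dpdk. The exact command line is not part of this log, so the following is only a reconstructed sketch of an equivalent standalone meson/ninja invocation, not the literal call made by the SPDK build scripts:

    cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk
    meson setup build-tmp \
        --buildtype=debug --default-library=static --libdir=lib \
        --prefix=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build \
        -Dc_args='-fPIC -Werror' -Dcpu_instruction_set=native -Dmax_lcores=128 \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Denable_docs=false -Denable_kmods=false -Dtests=false
        # plus the -Ddisable_apps=... and -Ddisable_libs=... lists quoted in the summary above
    ninja -C build-tmp -j 72    # matches the backend command reported further down in this log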
00:04:17.734 [197/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:17.734 [198/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:17.734 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:17.734 [200/268] Linking static target drivers/librte_bus_vdev.a 00:04:17.734 [201/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:17.734 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:17.734 [203/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:17.734 [204/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:17.734 [205/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:17.734 [206/268] Linking static target lib/librte_cryptodev.a 00:04:17.734 [207/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:17.734 [208/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:17.734 [209/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:17.734 [210/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:17.734 [211/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:17.734 [212/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:17.734 [213/268] Linking static target drivers/librte_mempool_ring.a 00:04:17.734 [214/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:17.734 [215/268] Linking static target drivers/librte_bus_pci.a 00:04:17.734 [216/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:17.997 [217/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:17.997 [218/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:17.997 [219/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:17.997 [220/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:17.997 [221/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:17.997 [222/268] Linking static target lib/librte_ethdev.a 00:04:17.997 [223/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.255 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.255 [225/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.514 [226/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.514 [227/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:18.514 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:18.514 [229/268] Linking static target lib/librte_vhost.a 00:04:19.887 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:20.822 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:27.385 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture 
output) 00:04:29.288 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:29.288 [234/268] Linking target lib/librte_eal.so.24.1 00:04:29.288 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:29.288 [236/268] Linking target lib/librte_timer.so.24.1 00:04:29.288 [237/268] Linking target lib/librte_ring.so.24.1 00:04:29.288 [238/268] Linking target lib/librte_dmadev.so.24.1 00:04:29.288 [239/268] Linking target lib/librte_pci.so.24.1 00:04:29.288 [240/268] Linking target lib/librte_meter.so.24.1 00:04:29.288 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:29.288 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:29.288 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:29.288 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:29.288 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:29.288 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:29.288 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:29.546 [248/268] Linking target lib/librte_mempool.so.24.1 00:04:29.546 [249/268] Linking target lib/librte_rcu.so.24.1 00:04:29.546 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:29.546 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:29.546 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:04:29.546 [253/268] Linking target lib/librte_mbuf.so.24.1 00:04:29.805 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:29.805 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:04:29.805 [256/268] Linking target lib/librte_compressdev.so.24.1 00:04:29.805 [257/268] Linking target lib/librte_net.so.24.1 00:04:29.805 [258/268] Linking target lib/librte_reorder.so.24.1 00:04:30.063 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:30.063 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:30.063 [261/268] Linking target lib/librte_security.so.24.1 00:04:30.063 [262/268] Linking target lib/librte_ethdev.so.24.1 00:04:30.063 [263/268] Linking target lib/librte_cmdline.so.24.1 00:04:30.063 [264/268] Linking target lib/librte_hash.so.24.1 00:04:30.063 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:30.063 [266/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:30.321 [267/268] Linking target lib/librte_power.so.24.1 00:04:30.321 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:30.321 INFO: autodetecting backend as ninja 00:04:30.321 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build-tmp -j 72 00:04:31.254 CC lib/log/log.o 00:04:31.254 CC lib/log/log_deprecated.o 00:04:31.254 CC lib/log/log_flags.o 00:04:31.254 CC lib/ut/ut.o 00:04:31.254 CC lib/ut_mock/mock.o 00:04:31.254 LIB libspdk_log.a 00:04:31.254 LIB libspdk_ut.a 00:04:31.254 LIB libspdk_ut_mock.a 00:04:31.527 CC lib/util/base64.o 00:04:31.527 CC lib/util/crc16.o 00:04:31.527 CC lib/util/bit_array.o 00:04:31.527 CC lib/util/cpuset.o 00:04:31.527 CC lib/util/crc32c.o 00:04:31.527 CC lib/util/crc32.o 00:04:31.527 CC 
lib/util/crc32_ieee.o 00:04:31.527 CC lib/util/crc64.o 00:04:31.527 CC lib/util/fd.o 00:04:31.527 CC lib/util/dif.o 00:04:31.527 CC lib/util/fd_group.o 00:04:31.527 CC lib/util/file.o 00:04:31.527 CC lib/util/hexlify.o 00:04:31.527 CC lib/util/iov.o 00:04:31.527 CC lib/util/math.o 00:04:31.527 CC lib/util/net.o 00:04:31.527 CC lib/util/pipe.o 00:04:31.527 CC lib/util/uuid.o 00:04:31.527 CC lib/util/strerror_tls.o 00:04:31.527 CC lib/util/string.o 00:04:31.527 CC lib/util/xor.o 00:04:31.527 CC lib/util/zipf.o 00:04:31.527 CC lib/util/md5.o 00:04:31.527 CC lib/dma/dma.o 00:04:31.527 CC lib/ioat/ioat.o 00:04:31.527 CXX lib/trace_parser/trace.o 00:04:31.785 CC lib/vfio_user/host/vfio_user_pci.o 00:04:31.785 CC lib/vfio_user/host/vfio_user.o 00:04:31.786 LIB libspdk_dma.a 00:04:31.786 LIB libspdk_ioat.a 00:04:31.786 LIB libspdk_vfio_user.a 00:04:32.044 LIB libspdk_util.a 00:04:32.044 LIB libspdk_trace_parser.a 00:04:32.302 CC lib/env_dpdk/env.o 00:04:32.302 CC lib/env_dpdk/memory.o 00:04:32.302 CC lib/env_dpdk/pci.o 00:04:32.302 CC lib/env_dpdk/init.o 00:04:32.302 CC lib/env_dpdk/threads.o 00:04:32.302 CC lib/env_dpdk/pci_ioat.o 00:04:32.302 CC lib/env_dpdk/pci_virtio.o 00:04:32.302 CC lib/env_dpdk/pci_vmd.o 00:04:32.302 CC lib/env_dpdk/pci_idxd.o 00:04:32.302 CC lib/env_dpdk/pci_event.o 00:04:32.302 CC lib/env_dpdk/sigbus_handler.o 00:04:32.302 CC lib/env_dpdk/pci_dpdk.o 00:04:32.302 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:32.302 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:32.302 CC lib/json/json_util.o 00:04:32.302 CC lib/json/json_parse.o 00:04:32.302 CC lib/json/json_write.o 00:04:32.302 CC lib/conf/conf.o 00:04:32.302 CC lib/rdma_utils/rdma_utils.o 00:04:32.302 CC lib/idxd/idxd.o 00:04:32.302 CC lib/idxd/idxd_user.o 00:04:32.302 CC lib/idxd/idxd_kernel.o 00:04:32.302 CC lib/vmd/vmd.o 00:04:32.302 CC lib/vmd/led.o 00:04:32.302 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:32.302 CC lib/rdma_provider/common.o 00:04:32.302 LIB libspdk_conf.a 00:04:32.302 LIB libspdk_rdma_provider.a 00:04:32.302 LIB libspdk_rdma_utils.a 00:04:32.302 LIB libspdk_json.a 00:04:32.561 LIB libspdk_idxd.a 00:04:32.561 LIB libspdk_vmd.a 00:04:32.561 CC lib/jsonrpc/jsonrpc_server.o 00:04:32.561 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:32.561 CC lib/jsonrpc/jsonrpc_client.o 00:04:32.561 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:32.819 LIB libspdk_jsonrpc.a 00:04:33.077 CC lib/rpc/rpc.o 00:04:33.077 LIB libspdk_env_dpdk.a 00:04:33.335 LIB libspdk_rpc.a 00:04:33.593 CC lib/trace/trace.o 00:04:33.593 CC lib/trace/trace_flags.o 00:04:33.593 CC lib/trace/trace_rpc.o 00:04:33.593 CC lib/notify/notify_rpc.o 00:04:33.593 CC lib/notify/notify.o 00:04:33.593 CC lib/keyring/keyring.o 00:04:33.593 CC lib/keyring/keyring_rpc.o 00:04:33.593 LIB libspdk_notify.a 00:04:33.851 LIB libspdk_trace.a 00:04:33.851 LIB libspdk_keyring.a 00:04:34.109 CC lib/thread/thread.o 00:04:34.109 CC lib/thread/iobuf.o 00:04:34.109 CC lib/sock/sock.o 00:04:34.109 CC lib/sock/sock_rpc.o 00:04:34.367 LIB libspdk_sock.a 00:04:34.625 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:34.625 CC lib/nvme/nvme_ctrlr.o 00:04:34.625 CC lib/nvme/nvme_fabric.o 00:04:34.625 CC lib/nvme/nvme_ns_cmd.o 00:04:34.625 CC lib/nvme/nvme_pcie.o 00:04:34.625 CC lib/nvme/nvme_ns.o 00:04:34.625 CC lib/nvme/nvme_pcie_common.o 00:04:34.625 CC lib/nvme/nvme_qpair.o 00:04:34.625 CC lib/nvme/nvme_quirks.o 00:04:34.625 CC lib/nvme/nvme.o 00:04:34.625 CC lib/nvme/nvme_transport.o 00:04:34.625 CC lib/nvme/nvme_discovery.o 00:04:34.625 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:34.626 CC 
lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:34.626 CC lib/nvme/nvme_tcp.o 00:04:34.626 CC lib/nvme/nvme_io_msg.o 00:04:34.626 CC lib/nvme/nvme_opal.o 00:04:34.626 CC lib/nvme/nvme_poll_group.o 00:04:34.626 CC lib/nvme/nvme_zns.o 00:04:34.626 CC lib/nvme/nvme_stubs.o 00:04:34.626 CC lib/nvme/nvme_auth.o 00:04:34.626 CC lib/nvme/nvme_rdma.o 00:04:34.626 CC lib/nvme/nvme_cuse.o 00:04:34.626 CC lib/nvme/nvme_vfio_user.o 00:04:34.883 LIB libspdk_thread.a 00:04:35.141 CC lib/accel/accel_rpc.o 00:04:35.141 CC lib/accel/accel.o 00:04:35.141 CC lib/accel/accel_sw.o 00:04:35.141 CC lib/fsdev/fsdev.o 00:04:35.141 CC lib/fsdev/fsdev_io.o 00:04:35.141 CC lib/fsdev/fsdev_rpc.o 00:04:35.141 CC lib/virtio/virtio_vhost_user.o 00:04:35.141 CC lib/virtio/virtio.o 00:04:35.141 CC lib/virtio/virtio_pci.o 00:04:35.141 CC lib/virtio/virtio_vfio_user.o 00:04:35.141 CC lib/blob/blobstore.o 00:04:35.141 CC lib/blob/zeroes.o 00:04:35.141 CC lib/blob/request.o 00:04:35.141 CC lib/blob/blob_bs_dev.o 00:04:35.141 CC lib/vfu_tgt/tgt_endpoint.o 00:04:35.141 CC lib/init/json_config.o 00:04:35.141 CC lib/vfu_tgt/tgt_rpc.o 00:04:35.141 CC lib/init/subsystem.o 00:04:35.141 CC lib/init/rpc.o 00:04:35.141 CC lib/init/subsystem_rpc.o 00:04:35.141 LIB libspdk_init.a 00:04:35.399 LIB libspdk_virtio.a 00:04:35.399 LIB libspdk_vfu_tgt.a 00:04:35.399 LIB libspdk_fsdev.a 00:04:35.399 CC lib/event/app_rpc.o 00:04:35.399 CC lib/event/app.o 00:04:35.399 CC lib/event/scheduler_static.o 00:04:35.399 CC lib/event/reactor.o 00:04:35.657 CC lib/event/log_rpc.o 00:04:35.657 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:35.914 LIB libspdk_event.a 00:04:35.914 LIB libspdk_accel.a 00:04:35.914 LIB libspdk_nvme.a 00:04:36.173 CC lib/bdev/bdev_zone.o 00:04:36.173 CC lib/bdev/bdev.o 00:04:36.173 CC lib/bdev/bdev_rpc.o 00:04:36.173 CC lib/bdev/part.o 00:04:36.173 CC lib/bdev/scsi_nvme.o 00:04:36.173 LIB libspdk_fuse_dispatcher.a 00:04:36.741 LIB libspdk_blob.a 00:04:37.308 CC lib/blobfs/blobfs.o 00:04:37.308 CC lib/blobfs/tree.o 00:04:37.308 CC lib/lvol/lvol.o 00:04:37.566 LIB libspdk_lvol.a 00:04:37.823 LIB libspdk_blobfs.a 00:04:37.823 LIB libspdk_bdev.a 00:04:38.083 CC lib/nbd/nbd_rpc.o 00:04:38.083 CC lib/nbd/nbd.o 00:04:38.083 CC lib/ftl/ftl_layout.o 00:04:38.083 CC lib/ftl/ftl_core.o 00:04:38.083 CC lib/ftl/ftl_init.o 00:04:38.083 CC lib/ftl/ftl_l2p.o 00:04:38.083 CC lib/ftl/ftl_debug.o 00:04:38.083 CC lib/ftl/ftl_io.o 00:04:38.083 CC lib/ftl/ftl_sb.o 00:04:38.083 CC lib/ftl/ftl_l2p_flat.o 00:04:38.083 CC lib/ftl/ftl_nv_cache.o 00:04:38.083 CC lib/ftl/ftl_band.o 00:04:38.083 CC lib/ftl/ftl_band_ops.o 00:04:38.083 CC lib/ftl/ftl_writer.o 00:04:38.083 CC lib/ftl/ftl_rq.o 00:04:38.083 CC lib/ftl/ftl_l2p_cache.o 00:04:38.083 CC lib/ftl/ftl_reloc.o 00:04:38.083 CC lib/ftl/ftl_p2l.o 00:04:38.083 CC lib/ftl/mngt/ftl_mngt.o 00:04:38.083 CC lib/scsi/dev.o 00:04:38.083 CC lib/ftl/ftl_p2l_log.o 00:04:38.083 CC lib/scsi/port.o 00:04:38.083 CC lib/scsi/lun.o 00:04:38.083 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:38.342 CC lib/nvmf/ctrlr_bdev.o 00:04:38.342 CC lib/nvmf/ctrlr.o 00:04:38.342 CC lib/nvmf/ctrlr_discovery.o 00:04:38.342 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:38.342 CC lib/scsi/scsi.o 00:04:38.342 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:38.342 CC lib/nvmf/subsystem.o 00:04:38.342 CC lib/scsi/scsi_bdev.o 00:04:38.342 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:38.342 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:38.342 CC lib/scsi/scsi_pr.o 00:04:38.342 CC lib/nvmf/nvmf_rpc.o 00:04:38.342 CC lib/nvmf/nvmf.o 00:04:38.342 CC lib/ublk/ublk.o 00:04:38.342 CC 
lib/scsi/task.o 00:04:38.342 CC lib/nvmf/transport.o 00:04:38.342 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:38.342 CC lib/scsi/scsi_rpc.o 00:04:38.342 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:38.342 CC lib/nvmf/stubs.o 00:04:38.342 CC lib/nvmf/tcp.o 00:04:38.342 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:38.342 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:38.342 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:38.342 CC lib/nvmf/mdns_server.o 00:04:38.342 CC lib/ublk/ublk_rpc.o 00:04:38.342 CC lib/nvmf/vfio_user.o 00:04:38.342 CC lib/nvmf/auth.o 00:04:38.342 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:38.342 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:38.342 CC lib/nvmf/rdma.o 00:04:38.342 CC lib/ftl/utils/ftl_md.o 00:04:38.342 CC lib/ftl/utils/ftl_conf.o 00:04:38.342 CC lib/ftl/utils/ftl_mempool.o 00:04:38.342 CC lib/ftl/utils/ftl_bitmap.o 00:04:38.342 CC lib/ftl/utils/ftl_property.o 00:04:38.342 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:38.342 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:38.342 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:38.342 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:38.342 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:38.342 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:38.342 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:38.342 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:38.342 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:38.343 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:38.343 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:38.343 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:38.343 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:38.343 CC lib/ftl/base/ftl_base_dev.o 00:04:38.343 CC lib/ftl/base/ftl_base_bdev.o 00:04:38.343 CC lib/ftl/ftl_trace.o 00:04:38.602 LIB libspdk_nbd.a 00:04:38.602 LIB libspdk_scsi.a 00:04:38.861 LIB libspdk_ublk.a 00:04:38.861 CC lib/iscsi/conn.o 00:04:38.861 CC lib/vhost/vhost.o 00:04:39.119 CC lib/iscsi/portal_grp.o 00:04:39.119 CC lib/vhost/vhost_rpc.o 00:04:39.119 CC lib/iscsi/init_grp.o 00:04:39.119 CC lib/iscsi/iscsi.o 00:04:39.119 CC lib/iscsi/param.o 00:04:39.119 CC lib/vhost/vhost_scsi.o 00:04:39.119 CC lib/iscsi/iscsi_rpc.o 00:04:39.119 CC lib/iscsi/tgt_node.o 00:04:39.119 CC lib/vhost/vhost_blk.o 00:04:39.119 CC lib/iscsi/iscsi_subsystem.o 00:04:39.119 CC lib/vhost/rte_vhost_user.o 00:04:39.119 LIB libspdk_ftl.a 00:04:39.119 CC lib/iscsi/task.o 00:04:39.684 LIB libspdk_nvmf.a 00:04:39.684 LIB libspdk_vhost.a 00:04:39.684 LIB libspdk_iscsi.a 00:04:40.249 CC module/vfu_device/vfu_virtio_blk.o 00:04:40.249 CC module/vfu_device/vfu_virtio.o 00:04:40.249 CC module/vfu_device/vfu_virtio_scsi.o 00:04:40.249 CC module/vfu_device/vfu_virtio_fs.o 00:04:40.249 CC module/vfu_device/vfu_virtio_rpc.o 00:04:40.249 CC module/env_dpdk/env_dpdk_rpc.o 00:04:40.249 CC module/blob/bdev/blob_bdev.o 00:04:40.249 CC module/keyring/linux/keyring.o 00:04:40.249 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:40.249 CC module/keyring/linux/keyring_rpc.o 00:04:40.250 CC module/sock/posix/posix.o 00:04:40.250 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:40.250 CC module/accel/error/accel_error_rpc.o 00:04:40.250 CC module/accel/error/accel_error.o 00:04:40.250 CC module/keyring/file/keyring.o 00:04:40.250 CC module/keyring/file/keyring_rpc.o 00:04:40.250 LIB libspdk_env_dpdk_rpc.a 00:04:40.250 CC module/accel/iaa/accel_iaa_rpc.o 00:04:40.250 CC module/accel/iaa/accel_iaa.o 00:04:40.250 CC module/accel/dsa/accel_dsa.o 00:04:40.250 CC module/accel/dsa/accel_dsa_rpc.o 00:04:40.250 CC module/accel/ioat/accel_ioat_rpc.o 00:04:40.250 CC module/accel/ioat/accel_ioat.o 00:04:40.250 CC module/fsdev/aio/fsdev_aio.o 
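The 'CC <object>' and 'LIB libspdk_<name>.a' lines in this stretch are SPDK's quiet make output: one line per compiled source file and one per static library archived from those objects. The real compiler flags and output directories are not visible here, so the pair below only illustrates what such lines typically stand for (using lib/log, whose three objects appear earlier in this log):

    cc $CFLAGS -c lib/log/log.c -o lib/log/log.o      # printed as:  CC lib/log/log.o
    ar crs libspdk_log.a lib/log/log.o lib/log/log_flags.o lib/log/log_deprecated.o
                                                       # printed as:  LIB libspdk_log.a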
00:04:40.250 CC module/scheduler/gscheduler/gscheduler.o 00:04:40.250 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:40.250 CC module/fsdev/aio/linux_aio_mgr.o 00:04:40.507 LIB libspdk_keyring_linux.a 00:04:40.507 LIB libspdk_scheduler_dpdk_governor.a 00:04:40.507 LIB libspdk_keyring_file.a 00:04:40.507 LIB libspdk_accel_error.a 00:04:40.507 LIB libspdk_scheduler_gscheduler.a 00:04:40.507 LIB libspdk_scheduler_dynamic.a 00:04:40.507 LIB libspdk_accel_iaa.a 00:04:40.507 LIB libspdk_accel_ioat.a 00:04:40.507 LIB libspdk_blob_bdev.a 00:04:40.507 LIB libspdk_accel_dsa.a 00:04:40.766 LIB libspdk_vfu_device.a 00:04:40.766 LIB libspdk_sock_posix.a 00:04:40.766 LIB libspdk_fsdev_aio.a 00:04:40.766 CC module/blobfs/bdev/blobfs_bdev.o 00:04:40.766 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:40.766 CC module/bdev/delay/vbdev_delay.o 00:04:40.766 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:41.025 CC module/bdev/error/vbdev_error.o 00:04:41.025 CC module/bdev/split/vbdev_split.o 00:04:41.025 CC module/bdev/error/vbdev_error_rpc.o 00:04:41.025 CC module/bdev/split/vbdev_split_rpc.o 00:04:41.025 CC module/bdev/null/bdev_null_rpc.o 00:04:41.025 CC module/bdev/null/bdev_null.o 00:04:41.025 CC module/bdev/malloc/bdev_malloc.o 00:04:41.025 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:41.025 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:41.025 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:41.025 CC module/bdev/aio/bdev_aio_rpc.o 00:04:41.025 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:41.025 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:41.025 CC module/bdev/aio/bdev_aio.o 00:04:41.025 CC module/bdev/gpt/vbdev_gpt.o 00:04:41.025 CC module/bdev/nvme/bdev_nvme.o 00:04:41.025 CC module/bdev/lvol/vbdev_lvol.o 00:04:41.025 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:41.025 CC module/bdev/gpt/gpt.o 00:04:41.025 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:41.025 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:41.025 CC module/bdev/nvme/nvme_rpc.o 00:04:41.025 CC module/bdev/nvme/bdev_mdns_client.o 00:04:41.025 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:41.025 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:41.025 CC module/bdev/nvme/vbdev_opal.o 00:04:41.025 CC module/bdev/ftl/bdev_ftl.o 00:04:41.025 CC module/bdev/passthru/vbdev_passthru.o 00:04:41.025 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:41.025 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:41.025 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:41.025 CC module/bdev/iscsi/bdev_iscsi.o 00:04:41.025 CC module/bdev/raid/bdev_raid.o 00:04:41.025 CC module/bdev/raid/bdev_raid_rpc.o 00:04:41.025 CC module/bdev/raid/bdev_raid_sb.o 00:04:41.025 CC module/bdev/raid/raid0.o 00:04:41.025 CC module/bdev/raid/raid1.o 00:04:41.025 CC module/bdev/raid/concat.o 00:04:41.025 LIB libspdk_blobfs_bdev.a 00:04:41.025 LIB libspdk_bdev_error.a 00:04:41.025 LIB libspdk_bdev_gpt.a 00:04:41.025 LIB libspdk_bdev_null.a 00:04:41.025 LIB libspdk_bdev_ftl.a 00:04:41.284 LIB libspdk_bdev_aio.a 00:04:41.284 LIB libspdk_bdev_split.a 00:04:41.284 LIB libspdk_bdev_zone_block.a 00:04:41.284 LIB libspdk_bdev_iscsi.a 00:04:41.284 LIB libspdk_bdev_delay.a 00:04:41.284 LIB libspdk_bdev_passthru.a 00:04:41.284 LIB libspdk_bdev_lvol.a 00:04:41.284 LIB libspdk_bdev_malloc.a 00:04:41.284 LIB libspdk_bdev_virtio.a 00:04:41.544 LIB libspdk_bdev_raid.a 00:04:42.481 LIB libspdk_bdev_nvme.a 00:04:42.740 CC module/event/subsystems/vmd/vmd.o 00:04:42.740 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:42.740 CC module/event/subsystems/sock/sock.o 00:04:42.740 CC 
module/event/subsystems/keyring/keyring.o 00:04:42.740 CC module/event/subsystems/scheduler/scheduler.o 00:04:42.740 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:42.740 CC module/event/subsystems/iobuf/iobuf.o 00:04:42.740 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:42.740 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:42.740 CC module/event/subsystems/fsdev/fsdev.o 00:04:42.998 LIB libspdk_event_vmd.a 00:04:42.998 LIB libspdk_event_keyring.a 00:04:42.998 LIB libspdk_event_sock.a 00:04:42.998 LIB libspdk_event_vfu_tgt.a 00:04:42.998 LIB libspdk_event_scheduler.a 00:04:42.998 LIB libspdk_event_vhost_blk.a 00:04:42.998 LIB libspdk_event_iobuf.a 00:04:42.998 LIB libspdk_event_fsdev.a 00:04:43.257 CC module/event/subsystems/accel/accel.o 00:04:43.257 LIB libspdk_event_accel.a 00:04:43.823 CC module/event/subsystems/bdev/bdev.o 00:04:43.823 LIB libspdk_event_bdev.a 00:04:44.081 CC module/event/subsystems/ublk/ublk.o 00:04:44.081 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:44.081 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:44.081 CC module/event/subsystems/nbd/nbd.o 00:04:44.081 CC module/event/subsystems/scsi/scsi.o 00:04:44.081 LIB libspdk_event_ublk.a 00:04:44.340 LIB libspdk_event_nbd.a 00:04:44.340 LIB libspdk_event_scsi.a 00:04:44.340 LIB libspdk_event_nvmf.a 00:04:44.598 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:44.598 CC module/event/subsystems/iscsi/iscsi.o 00:04:44.598 LIB libspdk_event_vhost_scsi.a 00:04:44.598 LIB libspdk_event_iscsi.a 00:04:44.856 CC app/spdk_nvme_perf/perf.o 00:04:44.856 CC app/spdk_top/spdk_top.o 00:04:44.856 CC app/spdk_nvme_discover/discovery_aer.o 00:04:44.856 CC app/spdk_nvme_identify/identify.o 00:04:44.856 CC app/spdk_lspci/spdk_lspci.o 00:04:44.856 CC app/trace_record/trace_record.o 00:04:44.856 CXX app/trace/trace.o 00:04:44.857 CC test/rpc_client/rpc_client_test.o 00:04:44.857 CC app/spdk_dd/spdk_dd.o 00:04:44.857 CC app/iscsi_tgt/iscsi_tgt.o 00:04:44.857 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:44.857 TEST_HEADER include/spdk/accel.h 00:04:44.857 TEST_HEADER include/spdk/accel_module.h 00:04:44.857 TEST_HEADER include/spdk/base64.h 00:04:44.857 TEST_HEADER include/spdk/assert.h 00:04:44.857 TEST_HEADER include/spdk/bdev.h 00:04:44.857 TEST_HEADER include/spdk/barrier.h 00:04:44.857 CC app/nvmf_tgt/nvmf_main.o 00:04:44.857 TEST_HEADER include/spdk/bdev_module.h 00:04:44.857 TEST_HEADER include/spdk/bit_array.h 00:04:44.857 TEST_HEADER include/spdk/bit_pool.h 00:04:44.857 TEST_HEADER include/spdk/bdev_zone.h 00:04:44.857 TEST_HEADER include/spdk/blob_bdev.h 00:04:44.857 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:44.857 TEST_HEADER include/spdk/blobfs.h 00:04:44.857 TEST_HEADER include/spdk/blob.h 00:04:44.857 TEST_HEADER include/spdk/config.h 00:04:45.122 TEST_HEADER include/spdk/cpuset.h 00:04:45.122 TEST_HEADER include/spdk/conf.h 00:04:45.122 TEST_HEADER include/spdk/crc16.h 00:04:45.122 TEST_HEADER include/spdk/crc32.h 00:04:45.122 TEST_HEADER include/spdk/crc64.h 00:04:45.122 TEST_HEADER include/spdk/dif.h 00:04:45.122 TEST_HEADER include/spdk/dma.h 00:04:45.122 TEST_HEADER include/spdk/endian.h 00:04:45.122 CC app/spdk_tgt/spdk_tgt.o 00:04:45.122 TEST_HEADER include/spdk/env.h 00:04:45.122 TEST_HEADER include/spdk/env_dpdk.h 00:04:45.122 TEST_HEADER include/spdk/fd_group.h 00:04:45.122 TEST_HEADER include/spdk/event.h 00:04:45.122 TEST_HEADER include/spdk/fd.h 00:04:45.122 TEST_HEADER include/spdk/file.h 00:04:45.122 TEST_HEADER include/spdk/fsdev.h 00:04:45.122 TEST_HEADER 
include/spdk/fsdev_module.h 00:04:45.122 TEST_HEADER include/spdk/ftl.h 00:04:45.122 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:45.122 TEST_HEADER include/spdk/gpt_spec.h 00:04:45.122 TEST_HEADER include/spdk/hexlify.h 00:04:45.122 TEST_HEADER include/spdk/histogram_data.h 00:04:45.122 TEST_HEADER include/spdk/idxd.h 00:04:45.122 TEST_HEADER include/spdk/idxd_spec.h 00:04:45.122 TEST_HEADER include/spdk/init.h 00:04:45.122 TEST_HEADER include/spdk/ioat.h 00:04:45.122 TEST_HEADER include/spdk/ioat_spec.h 00:04:45.122 TEST_HEADER include/spdk/iscsi_spec.h 00:04:45.122 TEST_HEADER include/spdk/json.h 00:04:45.122 TEST_HEADER include/spdk/jsonrpc.h 00:04:45.122 TEST_HEADER include/spdk/keyring.h 00:04:45.122 TEST_HEADER include/spdk/likely.h 00:04:45.122 TEST_HEADER include/spdk/keyring_module.h 00:04:45.122 TEST_HEADER include/spdk/log.h 00:04:45.122 TEST_HEADER include/spdk/lvol.h 00:04:45.122 TEST_HEADER include/spdk/md5.h 00:04:45.122 TEST_HEADER include/spdk/memory.h 00:04:45.122 TEST_HEADER include/spdk/mmio.h 00:04:45.122 TEST_HEADER include/spdk/nbd.h 00:04:45.122 TEST_HEADER include/spdk/net.h 00:04:45.122 TEST_HEADER include/spdk/notify.h 00:04:45.122 TEST_HEADER include/spdk/nvme.h 00:04:45.122 TEST_HEADER include/spdk/nvme_intel.h 00:04:45.122 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:45.122 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:45.122 TEST_HEADER include/spdk/nvme_spec.h 00:04:45.122 TEST_HEADER include/spdk/nvme_zns.h 00:04:45.122 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:45.122 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:45.122 TEST_HEADER include/spdk/nvmf.h 00:04:45.122 TEST_HEADER include/spdk/nvmf_spec.h 00:04:45.122 TEST_HEADER include/spdk/nvmf_transport.h 00:04:45.122 TEST_HEADER include/spdk/opal.h 00:04:45.122 TEST_HEADER include/spdk/opal_spec.h 00:04:45.122 TEST_HEADER include/spdk/pci_ids.h 00:04:45.122 TEST_HEADER include/spdk/pipe.h 00:04:45.122 TEST_HEADER include/spdk/queue.h 00:04:45.122 TEST_HEADER include/spdk/reduce.h 00:04:45.122 TEST_HEADER include/spdk/rpc.h 00:04:45.122 TEST_HEADER include/spdk/scheduler.h 00:04:45.122 TEST_HEADER include/spdk/scsi.h 00:04:45.122 TEST_HEADER include/spdk/scsi_spec.h 00:04:45.122 TEST_HEADER include/spdk/stdinc.h 00:04:45.122 TEST_HEADER include/spdk/sock.h 00:04:45.122 TEST_HEADER include/spdk/thread.h 00:04:45.122 TEST_HEADER include/spdk/string.h 00:04:45.122 TEST_HEADER include/spdk/trace.h 00:04:45.122 TEST_HEADER include/spdk/trace_parser.h 00:04:45.122 TEST_HEADER include/spdk/tree.h 00:04:45.122 TEST_HEADER include/spdk/ublk.h 00:04:45.122 TEST_HEADER include/spdk/util.h 00:04:45.122 TEST_HEADER include/spdk/uuid.h 00:04:45.122 TEST_HEADER include/spdk/version.h 00:04:45.122 CC examples/util/zipf/zipf.o 00:04:45.122 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:45.122 TEST_HEADER include/spdk/vhost.h 00:04:45.122 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:45.122 TEST_HEADER include/spdk/vmd.h 00:04:45.122 TEST_HEADER include/spdk/xor.h 00:04:45.122 TEST_HEADER include/spdk/zipf.h 00:04:45.122 CXX test/cpp_headers/accel.o 00:04:45.122 CXX test/cpp_headers/accel_module.o 00:04:45.122 CXX test/cpp_headers/assert.o 00:04:45.122 CXX test/cpp_headers/barrier.o 00:04:45.122 CXX test/cpp_headers/base64.o 00:04:45.122 CC examples/ioat/perf/perf.o 00:04:45.122 CXX test/cpp_headers/bdev_module.o 00:04:45.122 CXX test/cpp_headers/bdev.o 00:04:45.122 CXX test/cpp_headers/bdev_zone.o 00:04:45.122 CXX test/cpp_headers/bit_pool.o 00:04:45.122 CC app/fio/nvme/fio_plugin.o 00:04:45.122 CXX 
test/cpp_headers/bit_array.o 00:04:45.122 CXX test/cpp_headers/blob_bdev.o 00:04:45.122 CXX test/cpp_headers/blobfs_bdev.o 00:04:45.122 CC test/thread/lock/spdk_lock.o 00:04:45.123 CXX test/cpp_headers/blobfs.o 00:04:45.123 CXX test/cpp_headers/blob.o 00:04:45.123 CXX test/cpp_headers/conf.o 00:04:45.123 CXX test/cpp_headers/config.o 00:04:45.123 CXX test/cpp_headers/cpuset.o 00:04:45.123 CXX test/cpp_headers/crc16.o 00:04:45.123 CXX test/cpp_headers/crc32.o 00:04:45.123 CXX test/cpp_headers/crc64.o 00:04:45.123 CXX test/cpp_headers/dif.o 00:04:45.123 CXX test/cpp_headers/dma.o 00:04:45.123 CXX test/cpp_headers/endian.o 00:04:45.123 CXX test/cpp_headers/env_dpdk.o 00:04:45.123 CXX test/cpp_headers/env.o 00:04:45.123 CXX test/cpp_headers/event.o 00:04:45.123 CXX test/cpp_headers/fd_group.o 00:04:45.123 CC test/thread/poller_perf/poller_perf.o 00:04:45.123 CXX test/cpp_headers/fd.o 00:04:45.123 CXX test/cpp_headers/file.o 00:04:45.123 CXX test/cpp_headers/fsdev.o 00:04:45.123 CXX test/cpp_headers/fsdev_module.o 00:04:45.123 CXX test/cpp_headers/ftl.o 00:04:45.123 CXX test/cpp_headers/fuse_dispatcher.o 00:04:45.123 CXX test/cpp_headers/gpt_spec.o 00:04:45.123 CXX test/cpp_headers/hexlify.o 00:04:45.123 CXX test/cpp_headers/histogram_data.o 00:04:45.123 CC examples/ioat/verify/verify.o 00:04:45.123 CC test/env/vtophys/vtophys.o 00:04:45.123 CXX test/cpp_headers/idxd.o 00:04:45.123 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:45.123 CXX test/cpp_headers/idxd_spec.o 00:04:45.123 CXX test/cpp_headers/init.o 00:04:45.123 CXX test/cpp_headers/ioat.o 00:04:45.123 CXX test/cpp_headers/ioat_spec.o 00:04:45.123 CC test/env/pci/pci_ut.o 00:04:45.123 CC test/app/histogram_perf/histogram_perf.o 00:04:45.123 CC test/env/memory/memory_ut.o 00:04:45.123 CC test/app/jsoncat/jsoncat.o 00:04:45.123 CC test/app/stub/stub.o 00:04:45.123 LINK spdk_lspci 00:04:45.123 CXX test/cpp_headers/iscsi_spec.o 00:04:45.123 CC app/fio/bdev/fio_plugin.o 00:04:45.123 CC test/app/bdev_svc/bdev_svc.o 00:04:45.123 CC test/dma/test_dma/test_dma.o 00:04:45.123 LINK rpc_client_test 00:04:45.123 CC test/env/mem_callbacks/mem_callbacks.o 00:04:45.123 LINK spdk_nvme_discover 00:04:45.123 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:45.123 LINK interrupt_tgt 00:04:45.123 LINK spdk_trace_record 00:04:45.123 LINK nvmf_tgt 00:04:45.123 LINK zipf 00:04:45.123 LINK jsoncat 00:04:45.123 LINK vtophys 00:04:45.123 LINK poller_perf 00:04:45.385 CXX test/cpp_headers/json.o 00:04:45.385 CXX test/cpp_headers/jsonrpc.o 00:04:45.385 CXX test/cpp_headers/keyring.o 00:04:45.385 LINK histogram_perf 00:04:45.385 CXX test/cpp_headers/keyring_module.o 00:04:45.385 CXX test/cpp_headers/likely.o 00:04:45.385 CXX test/cpp_headers/log.o 00:04:45.385 CXX test/cpp_headers/lvol.o 00:04:45.385 CXX test/cpp_headers/md5.o 00:04:45.385 CXX test/cpp_headers/memory.o 00:04:45.385 CXX test/cpp_headers/mmio.o 00:04:45.385 LINK iscsi_tgt 00:04:45.385 CXX test/cpp_headers/nbd.o 00:04:45.385 CXX test/cpp_headers/net.o 00:04:45.385 CXX test/cpp_headers/notify.o 00:04:45.385 CXX test/cpp_headers/nvme.o 00:04:45.385 LINK env_dpdk_post_init 00:04:45.385 CXX test/cpp_headers/nvme_intel.o 00:04:45.385 CXX test/cpp_headers/nvme_ocssd.o 00:04:45.385 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:45.385 CXX test/cpp_headers/nvme_spec.o 00:04:45.385 CXX test/cpp_headers/nvme_zns.o 00:04:45.385 CXX test/cpp_headers/nvmf_cmd.o 00:04:45.385 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:45.385 CXX test/cpp_headers/nvmf_spec.o 00:04:45.385 CXX test/cpp_headers/nvmf.o 00:04:45.385 
CXX test/cpp_headers/nvmf_transport.o 00:04:45.386 CXX test/cpp_headers/opal.o 00:04:45.386 CXX test/cpp_headers/opal_spec.o 00:04:45.386 CXX test/cpp_headers/pci_ids.o 00:04:45.386 CXX test/cpp_headers/pipe.o 00:04:45.386 CXX test/cpp_headers/queue.o 00:04:45.386 CXX test/cpp_headers/reduce.o 00:04:45.386 CXX test/cpp_headers/rpc.o 00:04:45.386 CXX test/cpp_headers/scheduler.o 00:04:45.386 CXX test/cpp_headers/scsi.o 00:04:45.386 CXX test/cpp_headers/scsi_spec.o 00:04:45.386 LINK spdk_tgt 00:04:45.386 CXX test/cpp_headers/sock.o 00:04:45.386 CXX test/cpp_headers/stdinc.o 00:04:45.386 CXX test/cpp_headers/string.o 00:04:45.386 LINK stub 00:04:45.386 LINK ioat_perf 00:04:45.386 CXX test/cpp_headers/thread.o 00:04:45.386 LINK verify 00:04:45.386 CXX test/cpp_headers/trace.o 00:04:45.386 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:45.386 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:45.386 LINK bdev_svc 00:04:45.386 LINK spdk_trace 00:04:45.386 CC test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.o 00:04:45.386 CC test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.o 00:04:45.386 CXX test/cpp_headers/trace_parser.o 00:04:45.386 CXX test/cpp_headers/tree.o 00:04:45.386 CXX test/cpp_headers/ublk.o 00:04:45.386 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:45.386 CXX test/cpp_headers/util.o 00:04:45.386 CXX test/cpp_headers/uuid.o 00:04:45.386 CXX test/cpp_headers/version.o 00:04:45.386 CXX test/cpp_headers/vfio_user_pci.o 00:04:45.386 CXX test/cpp_headers/vfio_user_spec.o 00:04:45.386 CXX test/cpp_headers/vhost.o 00:04:45.386 CXX test/cpp_headers/vmd.o 00:04:45.386 CXX test/cpp_headers/xor.o 00:04:45.386 CXX test/cpp_headers/zipf.o 00:04:45.644 LINK spdk_dd 00:04:45.644 LINK pci_ut 00:04:45.644 LINK test_dma 00:04:45.644 LINK nvme_fuzz 00:04:45.644 LINK spdk_nvme 00:04:45.644 LINK mem_callbacks 00:04:45.644 LINK spdk_bdev 00:04:45.644 LINK spdk_nvme_identify 00:04:45.644 LINK llvm_vfio_fuzz 00:04:45.903 LINK spdk_nvme_perf 00:04:45.903 LINK vhost_fuzz 00:04:45.903 CC examples/vmd/led/led.o 00:04:45.903 CC examples/vmd/lsvmd/lsvmd.o 00:04:45.903 CC examples/idxd/perf/perf.o 00:04:45.903 CC examples/sock/hello_world/hello_sock.o 00:04:45.903 CC examples/thread/thread/thread_ex.o 00:04:45.903 LINK spdk_top 00:04:45.903 LINK lsvmd 00:04:45.903 LINK led 00:04:45.903 LINK llvm_nvme_fuzz 00:04:45.903 CC app/vhost/vhost.o 00:04:46.161 LINK hello_sock 00:04:46.161 LINK idxd_perf 00:04:46.161 LINK thread 00:04:46.161 LINK memory_ut 00:04:46.161 LINK vhost 00:04:46.418 LINK spdk_lock 00:04:46.675 LINK iscsi_fuzz 00:04:46.675 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:46.675 CC examples/nvme/arbitration/arbitration.o 00:04:46.675 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:46.675 CC examples/nvme/reconnect/reconnect.o 00:04:46.675 CC examples/nvme/abort/abort.o 00:04:46.675 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:46.675 CC examples/nvme/hello_world/hello_world.o 00:04:46.675 CC examples/nvme/hotplug/hotplug.o 00:04:46.932 LINK pmr_persistence 00:04:46.932 CC test/event/event_perf/event_perf.o 00:04:46.932 CC test/event/reactor_perf/reactor_perf.o 00:04:46.932 CC test/event/reactor/reactor.o 00:04:46.932 LINK cmb_copy 00:04:46.932 LINK hello_world 00:04:46.932 CC test/event/app_repeat/app_repeat.o 00:04:46.932 LINK hotplug 00:04:46.932 CC test/event/scheduler/scheduler.o 00:04:46.932 LINK reconnect 00:04:46.932 LINK arbitration 00:04:46.932 LINK abort 00:04:46.932 LINK event_perf 00:04:46.932 LINK reactor 00:04:46.932 LINK reactor_perf 00:04:47.190 LINK nvme_manage 00:04:47.190 LINK 
app_repeat 00:04:47.190 LINK scheduler 00:04:47.448 CC test/nvme/aer/aer.o 00:04:47.448 CC test/nvme/sgl/sgl.o 00:04:47.448 CC test/nvme/compliance/nvme_compliance.o 00:04:47.448 CC test/nvme/simple_copy/simple_copy.o 00:04:47.448 CC test/nvme/err_injection/err_injection.o 00:04:47.448 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:47.448 CC test/nvme/startup/startup.o 00:04:47.448 CC test/nvme/e2edp/nvme_dp.o 00:04:47.448 CC test/nvme/overhead/overhead.o 00:04:47.448 CC test/nvme/fdp/fdp.o 00:04:47.448 CC test/nvme/fused_ordering/fused_ordering.o 00:04:47.448 CC test/nvme/cuse/cuse.o 00:04:47.448 CC test/nvme/reset/reset.o 00:04:47.448 CC test/nvme/reserve/reserve.o 00:04:47.448 CC test/nvme/boot_partition/boot_partition.o 00:04:47.448 CC test/nvme/connect_stress/connect_stress.o 00:04:47.448 CC test/accel/dif/dif.o 00:04:47.448 CC test/lvol/esnap/esnap.o 00:04:47.448 CC test/blobfs/mkfs/mkfs.o 00:04:47.448 LINK startup 00:04:47.448 LINK boot_partition 00:04:47.448 LINK doorbell_aers 00:04:47.449 LINK err_injection 00:04:47.449 LINK reserve 00:04:47.449 LINK connect_stress 00:04:47.449 LINK fused_ordering 00:04:47.707 LINK simple_copy 00:04:47.707 LINK sgl 00:04:47.707 LINK nvme_dp 00:04:47.707 LINK reset 00:04:47.707 LINK fdp 00:04:47.707 LINK aer 00:04:47.707 LINK overhead 00:04:47.707 LINK mkfs 00:04:47.707 LINK nvme_compliance 00:04:47.707 CC examples/accel/perf/accel_perf.o 00:04:47.707 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:47.708 CC examples/blob/cli/blobcli.o 00:04:47.708 CC examples/blob/hello_world/hello_blob.o 00:04:47.966 LINK dif 00:04:47.966 LINK hello_fsdev 00:04:47.966 LINK hello_blob 00:04:47.966 LINK accel_perf 00:04:48.224 LINK blobcli 00:04:48.224 LINK cuse 00:04:48.791 CC examples/bdev/hello_world/hello_bdev.o 00:04:48.791 CC examples/bdev/bdevperf/bdevperf.o 00:04:49.049 LINK hello_bdev 00:04:49.307 LINK bdevperf 00:04:49.566 CC test/bdev/bdevio/bdevio.o 00:04:49.824 LINK bdevio 00:04:50.761 CC examples/nvmf/nvmf/nvmf.o 00:04:51.020 LINK esnap 00:04:51.020 LINK nvmf 00:04:52.401 00:04:52.401 real 0m45.358s 00:04:52.401 user 6m51.517s 00:04:52.401 sys 2m16.696s 00:04:52.401 01:42:21 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:52.401 01:42:21 make -- common/autotest_common.sh@10 -- $ set +x 00:04:52.401 ************************************ 00:04:52.401 END TEST make 00:04:52.401 ************************************ 00:04:52.401 01:42:21 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:52.401 01:42:21 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:52.401 01:42:21 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:52.401 01:42:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:52.401 01:42:21 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:52.401 01:42:21 -- pm/common@44 -- $ pid=3920006 00:04:52.401 01:42:21 -- pm/common@50 -- $ kill -TERM 3920006 00:04:52.401 01:42:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:52.401 01:42:21 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:52.401 01:42:21 -- pm/common@44 -- $ pid=3920008 00:04:52.401 01:42:21 -- pm/common@50 -- $ kill -TERM 3920008 00:04:52.401 01:42:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:52.401 01:42:21 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 
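The pm/common trace that starts here (and continues just below) is autobuild.sh stopping the resource monitors launched at the start of the run: for each collect-*.pid file under the power/ output directory it sends TERM to the recorded PID, with the BMC power collector killed via sudo. A minimal sketch of that pidfile pattern, with illustrative variable names rather than the script's own:

    power_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power
    for pidfile in "$power_dir"/collect-{cpu-load,vmstat,cpu-temp,bmc-pm}.pid; do
        [[ -e $pidfile ]] || continue
        kill -TERM "$(cat "$pidfile")"    # the bmc-pm collector is TERMed via 'sudo -E kill' in the trace
    done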
00:04:52.401 01:42:21 -- pm/common@44 -- $ pid=3920010 00:04:52.401 01:42:21 -- pm/common@50 -- $ kill -TERM 3920010 00:04:52.401 01:42:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:52.401 01:42:21 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:52.401 01:42:21 -- pm/common@44 -- $ pid=3920037 00:04:52.401 01:42:21 -- pm/common@50 -- $ sudo -E kill -TERM 3920037 00:04:52.401 01:42:21 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:52.401 01:42:21 -- common/autotest_common.sh@1681 -- # lcov --version 00:04:52.401 01:42:21 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:52.401 01:42:22 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:52.401 01:42:22 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:52.401 01:42:22 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:52.401 01:42:22 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:52.401 01:42:22 -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.401 01:42:22 -- scripts/common.sh@336 -- # read -ra ver1 00:04:52.401 01:42:22 -- scripts/common.sh@337 -- # IFS=.-: 00:04:52.401 01:42:22 -- scripts/common.sh@337 -- # read -ra ver2 00:04:52.401 01:42:22 -- scripts/common.sh@338 -- # local 'op=<' 00:04:52.401 01:42:22 -- scripts/common.sh@340 -- # ver1_l=2 00:04:52.401 01:42:22 -- scripts/common.sh@341 -- # ver2_l=1 00:04:52.401 01:42:22 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:52.401 01:42:22 -- scripts/common.sh@344 -- # case "$op" in 00:04:52.402 01:42:22 -- scripts/common.sh@345 -- # : 1 00:04:52.402 01:42:22 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:52.402 01:42:22 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:52.402 01:42:22 -- scripts/common.sh@365 -- # decimal 1 00:04:52.402 01:42:22 -- scripts/common.sh@353 -- # local d=1 00:04:52.402 01:42:22 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.402 01:42:22 -- scripts/common.sh@355 -- # echo 1 00:04:52.402 01:42:22 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:52.402 01:42:22 -- scripts/common.sh@366 -- # decimal 2 00:04:52.402 01:42:22 -- scripts/common.sh@353 -- # local d=2 00:04:52.402 01:42:22 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.402 01:42:22 -- scripts/common.sh@355 -- # echo 2 00:04:52.402 01:42:22 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:52.402 01:42:22 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:52.402 01:42:22 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:52.402 01:42:22 -- scripts/common.sh@368 -- # return 0 00:04:52.402 01:42:22 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.402 01:42:22 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:52.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.402 --rc genhtml_branch_coverage=1 00:04:52.402 --rc genhtml_function_coverage=1 00:04:52.402 --rc genhtml_legend=1 00:04:52.402 --rc geninfo_all_blocks=1 00:04:52.402 --rc geninfo_unexecuted_blocks=1 00:04:52.402 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:52.402 ' 00:04:52.402 01:42:22 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:52.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.402 --rc genhtml_branch_coverage=1 00:04:52.402 --rc genhtml_function_coverage=1 00:04:52.402 --rc genhtml_legend=1 00:04:52.402 --rc geninfo_all_blocks=1 
00:04:52.402 --rc geninfo_unexecuted_blocks=1 00:04:52.402 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:52.402 ' 00:04:52.402 01:42:22 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:52.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.402 --rc genhtml_branch_coverage=1 00:04:52.402 --rc genhtml_function_coverage=1 00:04:52.402 --rc genhtml_legend=1 00:04:52.402 --rc geninfo_all_blocks=1 00:04:52.402 --rc geninfo_unexecuted_blocks=1 00:04:52.402 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:52.402 ' 00:04:52.402 01:42:22 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:52.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.402 --rc genhtml_branch_coverage=1 00:04:52.402 --rc genhtml_function_coverage=1 00:04:52.402 --rc genhtml_legend=1 00:04:52.402 --rc geninfo_all_blocks=1 00:04:52.402 --rc geninfo_unexecuted_blocks=1 00:04:52.402 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:04:52.402 ' 00:04:52.402 01:42:22 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:04:52.402 01:42:22 -- nvmf/common.sh@7 -- # uname -s 00:04:52.402 01:42:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:52.402 01:42:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:52.402 01:42:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:52.402 01:42:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:52.402 01:42:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:52.402 01:42:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:52.402 01:42:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:52.402 01:42:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:52.402 01:42:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:52.402 01:42:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:52.402 01:42:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8023d868-666a-e711-906e-0017a4403562 00:04:52.402 01:42:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=8023d868-666a-e711-906e-0017a4403562 00:04:52.402 01:42:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:52.402 01:42:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:52.402 01:42:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:52.402 01:42:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:52.402 01:42:22 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:04:52.402 01:42:22 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:52.402 01:42:22 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:52.402 01:42:22 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:52.402 01:42:22 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:52.402 01:42:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.402 01:42:22 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.402 01:42:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.402 01:42:22 -- paths/export.sh@5 -- # export PATH 00:04:52.402 01:42:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.402 01:42:22 -- nvmf/common.sh@51 -- # : 0 00:04:52.402 01:42:22 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:52.402 01:42:22 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:52.402 01:42:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:52.402 01:42:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:52.402 01:42:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:52.402 01:42:22 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:52.402 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:52.402 01:42:22 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:52.402 01:42:22 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:52.402 01:42:22 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:52.402 01:42:22 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:52.402 01:42:22 -- spdk/autotest.sh@32 -- # uname -s 00:04:52.402 01:42:22 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:52.402 01:42:22 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:52.402 01:42:22 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:04:52.402 01:42:22 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:52.402 01:42:22 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/coredumps 00:04:52.402 01:42:22 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:52.402 01:42:22 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:52.402 01:42:22 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:52.402 01:42:22 -- spdk/autotest.sh@48 -- # udevadm_pid=3979101 00:04:52.402 01:42:22 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:52.402 01:42:22 -- pm/common@17 -- # local monitor 00:04:52.402 01:42:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:52.402 01:42:22 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:52.402 01:42:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:52.402 01:42:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:52.402 01:42:22 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:52.402 01:42:22 -- pm/common@25 -- # sleep 1 00:04:52.662 01:42:22 -- pm/common@21 -- # date +%s 00:04:52.662 01:42:22 -- pm/common@21 -- # date +%s 00:04:52.662 01:42:22 -- pm/common@21 -- # date +%s 
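The "/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" message a few entries above is bash complaining that an empty string was handed to a numeric test ('[' '' -eq 1 ']'); the test simply evaluates false and the script carries on, so in this run it reads as noise rather than a failure. A minimal reproduction of that diagnostic (the variable name flag is invented for illustration and is not the option nvmf/common.sh actually checks):

    flag=""                        # stand-in for an unset option
    if [ "$flag" -eq 1 ]; then     # prints "[: : integer expression expected" on stderr
        echo "feature enabled"
    else
        echo "feature disabled"    # -eq on a non-integer returns false, execution continues
    fi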
00:04:52.662 01:42:22 -- pm/common@21 -- # date +%s 00:04:52.662 01:42:22 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728430942 00:04:52.662 01:42:22 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728430942 00:04:52.662 01:42:22 -- pm/common@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728430942 00:04:52.662 01:42:22 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728430942 00:04:52.662 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728430942_collect-vmstat.pm.log 00:04:52.662 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728430942_collect-cpu-load.pm.log 00:04:52.662 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728430942_collect-cpu-temp.pm.log 00:04:52.662 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728430942_collect-bmc-pm.bmc.pm.log 00:04:53.599 01:42:23 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:53.600 01:42:23 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:53.600 01:42:23 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:53.600 01:42:23 -- common/autotest_common.sh@10 -- # set +x 00:04:53.600 01:42:23 -- spdk/autotest.sh@59 -- # create_test_list 00:04:53.600 01:42:23 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:53.600 01:42:23 -- common/autotest_common.sh@10 -- # set +x 00:04:53.600 01:42:23 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/autotest.sh 00:04:53.600 01:42:23 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:04:53.600 01:42:23 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:04:53.600 01:42:23 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:04:53.600 01:42:23 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:04:53.600 01:42:23 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:53.600 01:42:23 -- common/autotest_common.sh@1455 -- # uname 00:04:53.600 01:42:23 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:53.600 01:42:23 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:53.600 01:42:23 -- common/autotest_common.sh@1475 -- # uname 00:04:53.600 01:42:23 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:53.600 01:42:23 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:53.600 01:42:23 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh --version 00:04:53.600 lcov: LCOV version 1.15 00:04:53.600 01:42:23 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info 00:05:00.161 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:05:06.724 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/mdns_server.gcno 00:05:09.258 01:42:38 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:09.258 01:42:38 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:09.258 01:42:38 -- common/autotest_common.sh@10 -- # set +x 00:05:09.258 01:42:38 -- spdk/autotest.sh@78 -- # rm -f 00:05:09.258 01:42:38 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:12.731 0000:1a:00.0 (8086 0a54): Already using the nvme driver 00:05:12.731 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:05:12.731 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:05:12.731 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:05:12.731 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:05:12.731 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:05:12.731 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:05:12.731 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:05:12.731 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:05:12.731 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:05:12.731 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:05:12.731 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:05:12.731 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:05:12.731 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:05:12.731 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:05:12.731 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:05:12.731 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:05:14.636 01:42:44 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:14.636 01:42:44 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:14.636 01:42:44 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:14.636 01:42:44 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:14.636 01:42:44 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:14.636 01:42:44 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:14.636 01:42:44 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:14.636 01:42:44 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:14.636 01:42:44 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:14.636 01:42:44 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:14.636 01:42:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:14.636 01:42:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:14.636 01:42:44 -- spdk/autotest.sh@100 -- # block_in_use 
/dev/nvme0n1 00:05:14.636 01:42:44 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:14.636 01:42:44 -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:14.636 No valid GPT data, bailing 00:05:14.896 01:42:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:14.896 01:42:44 -- scripts/common.sh@394 -- # pt= 00:05:14.896 01:42:44 -- scripts/common.sh@395 -- # return 1 00:05:14.896 01:42:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:14.896 1+0 records in 00:05:14.896 1+0 records out 00:05:14.896 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00668132 s, 157 MB/s 00:05:14.896 01:42:44 -- spdk/autotest.sh@105 -- # sync 00:05:14.896 01:42:44 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:14.896 01:42:44 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:14.896 01:42:44 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:21.467 01:42:49 -- spdk/autotest.sh@111 -- # uname -s 00:05:21.467 01:42:49 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:21.467 01:42:49 -- spdk/autotest.sh@111 -- # [[ 1 -eq 1 ]] 00:05:21.467 01:42:49 -- spdk/autotest.sh@112 -- # run_test setup.sh /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:05:21.467 01:42:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:21.467 01:42:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:21.467 01:42:49 -- common/autotest_common.sh@10 -- # set +x 00:05:21.467 ************************************ 00:05:21.467 START TEST setup.sh 00:05:21.467 ************************************ 00:05:21.467 01:42:49 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/test-setup.sh 00:05:21.467 * Looking for test storage... 00:05:21.467 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:05:21.467 01:42:50 setup.sh -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:21.467 01:42:50 setup.sh -- common/autotest_common.sh@1681 -- # lcov --version 00:05:21.467 01:42:50 setup.sh -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:21.467 01:42:50 setup.sh -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:21.467 01:42:50 setup.sh -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.467 01:42:50 setup.sh -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.467 01:42:50 setup.sh -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.467 01:42:50 setup.sh -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.467 01:42:50 setup.sh -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.467 01:42:50 setup.sh -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.467 01:42:50 setup.sh -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.467 01:42:50 setup.sh -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.467 01:42:50 setup.sh -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.467 01:42:50 setup.sh -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.467 01:42:50 setup.sh -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.467 01:42:50 setup.sh -- scripts/common.sh@344 -- # case "$op" in 00:05:21.467 01:42:50 setup.sh -- scripts/common.sh@345 -- # : 1 00:05:21.467 01:42:50 setup.sh -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.467 01:42:50 setup.sh -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:21.468 01:42:50 setup.sh -- scripts/common.sh@365 -- # decimal 1 00:05:21.468 01:42:50 setup.sh -- scripts/common.sh@353 -- # local d=1 00:05:21.468 01:42:50 setup.sh -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.468 01:42:50 setup.sh -- scripts/common.sh@355 -- # echo 1 00:05:21.468 01:42:50 setup.sh -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.468 01:42:50 setup.sh -- scripts/common.sh@366 -- # decimal 2 00:05:21.468 01:42:50 setup.sh -- scripts/common.sh@353 -- # local d=2 00:05:21.468 01:42:50 setup.sh -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.468 01:42:50 setup.sh -- scripts/common.sh@355 -- # echo 2 00:05:21.468 01:42:50 setup.sh -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.468 01:42:50 setup.sh -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.468 01:42:50 setup.sh -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.468 01:42:50 setup.sh -- scripts/common.sh@368 -- # return 0 00:05:21.468 01:42:50 setup.sh -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.468 01:42:50 setup.sh -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:21.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.468 --rc genhtml_branch_coverage=1 00:05:21.468 --rc genhtml_function_coverage=1 00:05:21.468 --rc genhtml_legend=1 00:05:21.468 --rc geninfo_all_blocks=1 00:05:21.468 --rc geninfo_unexecuted_blocks=1 00:05:21.468 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:21.468 ' 00:05:21.468 01:42:50 setup.sh -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:21.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.468 --rc genhtml_branch_coverage=1 00:05:21.468 --rc genhtml_function_coverage=1 00:05:21.468 --rc genhtml_legend=1 00:05:21.468 --rc geninfo_all_blocks=1 00:05:21.468 --rc geninfo_unexecuted_blocks=1 00:05:21.468 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:21.468 ' 00:05:21.468 01:42:50 setup.sh -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:21.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.468 --rc genhtml_branch_coverage=1 00:05:21.468 --rc genhtml_function_coverage=1 00:05:21.468 --rc genhtml_legend=1 00:05:21.468 --rc geninfo_all_blocks=1 00:05:21.468 --rc geninfo_unexecuted_blocks=1 00:05:21.468 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:21.468 ' 00:05:21.468 01:42:50 setup.sh -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:21.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.468 --rc genhtml_branch_coverage=1 00:05:21.468 --rc genhtml_function_coverage=1 00:05:21.468 --rc genhtml_legend=1 00:05:21.468 --rc geninfo_all_blocks=1 00:05:21.468 --rc geninfo_unexecuted_blocks=1 00:05:21.468 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:21.468 ' 00:05:21.468 01:42:50 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:05:21.468 01:42:50 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:21.468 01:42:50 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:05:21.468 01:42:50 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:21.468 01:42:50 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:21.468 
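The scripts/common.sh trace just above (lt 1.15 2 / cmp_versions 1.15 '<' 2) is the same lcov version gate that ran at the top of autotest.sh: the installed lcov reports 1.15, each version string is split into fields on IFS=.-:, the fields are compared numerically, and because 1.15 < 2 the legacy --rc lcov_branch_coverage / --rc lcov_function_coverage option set is exported. A compact sketch of that comparison idea, not the SPDK helper itself (version_lt is an invented name):

    version_lt() {                         # version_lt 1.15 2 -> succeeds when $1 < $2
        local -a a b
        IFS=. read -ra a <<< "$1"
        IFS=. read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
            (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
        done
        return 1                           # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "older than 2.x: use the --rc lcov_* option set"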
01:42:50 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:21.468 ************************************ 00:05:21.468 START TEST acl 00:05:21.468 ************************************ 00:05:21.468 01:42:50 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/acl.sh 00:05:21.468 * Looking for test storage... 00:05:21.468 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:05:21.468 01:42:50 setup.sh.acl -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:21.468 01:42:50 setup.sh.acl -- common/autotest_common.sh@1681 -- # lcov --version 00:05:21.468 01:42:50 setup.sh.acl -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:21.468 01:42:50 setup.sh.acl -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:21.468 01:42:50 setup.sh.acl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.468 01:42:50 setup.sh.acl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.468 01:42:50 setup.sh.acl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.468 01:42:50 setup.sh.acl -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.468 01:42:50 setup.sh.acl -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.468 01:42:50 setup.sh.acl -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.468 01:42:50 setup.sh.acl -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.468 01:42:50 setup.sh.acl -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.468 01:42:50 setup.sh.acl -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.468 01:42:50 setup.sh.acl -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.468 01:42:50 setup.sh.acl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.468 01:42:50 setup.sh.acl -- scripts/common.sh@344 -- # case "$op" in 00:05:21.468 01:42:50 setup.sh.acl -- scripts/common.sh@345 -- # : 1 00:05:21.468 01:42:50 setup.sh.acl -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.468 01:42:50 setup.sh.acl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:21.468 01:42:50 setup.sh.acl -- scripts/common.sh@365 -- # decimal 1 00:05:21.468 01:42:50 setup.sh.acl -- scripts/common.sh@353 -- # local d=1 00:05:21.468 01:42:50 setup.sh.acl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.468 01:42:50 setup.sh.acl -- scripts/common.sh@355 -- # echo 1 00:05:21.468 01:42:50 setup.sh.acl -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.468 01:42:50 setup.sh.acl -- scripts/common.sh@366 -- # decimal 2 00:05:21.468 01:42:50 setup.sh.acl -- scripts/common.sh@353 -- # local d=2 00:05:21.468 01:42:50 setup.sh.acl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.468 01:42:50 setup.sh.acl -- scripts/common.sh@355 -- # echo 2 00:05:21.468 01:42:50 setup.sh.acl -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.468 01:42:50 setup.sh.acl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.468 01:42:50 setup.sh.acl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.468 01:42:50 setup.sh.acl -- scripts/common.sh@368 -- # return 0 00:05:21.468 01:42:50 setup.sh.acl -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.468 01:42:50 setup.sh.acl -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:21.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.468 --rc genhtml_branch_coverage=1 00:05:21.468 --rc genhtml_function_coverage=1 00:05:21.468 --rc genhtml_legend=1 00:05:21.468 --rc geninfo_all_blocks=1 00:05:21.468 --rc geninfo_unexecuted_blocks=1 00:05:21.468 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:21.468 ' 00:05:21.468 01:42:50 setup.sh.acl -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:21.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.468 --rc genhtml_branch_coverage=1 00:05:21.468 --rc genhtml_function_coverage=1 00:05:21.468 --rc genhtml_legend=1 00:05:21.468 --rc geninfo_all_blocks=1 00:05:21.468 --rc geninfo_unexecuted_blocks=1 00:05:21.468 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:21.468 ' 00:05:21.468 01:42:50 setup.sh.acl -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:21.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.468 --rc genhtml_branch_coverage=1 00:05:21.468 --rc genhtml_function_coverage=1 00:05:21.468 --rc genhtml_legend=1 00:05:21.468 --rc geninfo_all_blocks=1 00:05:21.468 --rc geninfo_unexecuted_blocks=1 00:05:21.468 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:21.468 ' 00:05:21.468 01:42:50 setup.sh.acl -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:21.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.468 --rc genhtml_branch_coverage=1 00:05:21.468 --rc genhtml_function_coverage=1 00:05:21.468 --rc genhtml_legend=1 00:05:21.468 --rc geninfo_all_blocks=1 00:05:21.468 --rc geninfo_unexecuted_blocks=1 00:05:21.468 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:21.468 ' 00:05:21.468 01:42:50 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:05:21.468 01:42:50 setup.sh.acl -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:21.468 01:42:50 setup.sh.acl -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:21.468 01:42:50 setup.sh.acl -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:21.468 01:42:50 setup.sh.acl -- 
common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:21.468 01:42:50 setup.sh.acl -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:21.468 01:42:50 setup.sh.acl -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:21.468 01:42:50 setup.sh.acl -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:21.468 01:42:50 setup.sh.acl -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:21.468 01:42:50 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:05:21.468 01:42:50 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:05:21.468 01:42:50 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:05:21.468 01:42:50 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:05:21.468 01:42:50 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:05:21.468 01:42:50 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:21.468 01:42:50 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:26.742 01:42:56 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:05:26.742 01:42:56 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:05:26.742 01:42:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:26.742 01:42:56 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:05:26.742 01:42:56 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:05:26.742 01:42:56 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:05:30.035 Hugepages 00:05:30.035 node hugesize free / total 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:30.035 00:05:30.035 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:30.035 01:42:59 setup.sh.acl -- 
setup/acl.sh@20 -- # continue 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:1a:00.0 == *:*:*.* ]] 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\1\a\:\0\0\.\0* ]] 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:30.035 01:42:59 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:30.295 01:42:59 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:05:30.295 01:42:59 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:05:30.295 01:42:59 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:30.295 01:42:59 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:30.295 01:42:59 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:30.295 ************************************ 00:05:30.295 START TEST denied 00:05:30.295 ************************************ 00:05:30.295 01:42:59 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:05:30.295 01:42:59 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:1a:00.0' 00:05:30.295 01:42:59 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:05:30.295 01:42:59 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:1a:00.0' 00:05:30.295 01:42:59 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:05:30.295 01:42:59 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:36.866 0000:1a:00.0 (8086 0a54): Skipping denied controller at 0000:1a:00.0 00:05:36.866 01:43:05 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:1a:00.0 00:05:36.866 01:43:05 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:05:36.866 01:43:05 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:05:36.866 01:43:05 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:1a:00.0 ]] 00:05:36.866 01:43:05 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:1a:00.0/driver 00:05:36.866 01:43:05 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:36.866 01:43:05 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:36.866 01:43:05 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:05:36.866 01:43:05 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:36.866 01:43:05 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:43.435 
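The denied test above sets PCI_BLOCKED=' 0000:1a:00.0', reruns setup.sh config, greps for the "Skipping denied controller" line, and then verifies that the controller is still attached to the kernel nvme driver by resolving its driver symlink in sysfs (setup/acl.sh@32-33 in the trace). A standalone sketch of that verification step, with the BDF taken from this run; it is not the acl.sh function itself:

    bdf=0000:1a:00.0                                              # this run's NVMe controller
    driver=$(basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")")
    if [[ "$driver" == nvme ]]; then
        echo "$bdf is still bound to the kernel nvme driver (blocked as expected)"
    else
        echo "$bdf is bound to '$driver' instead" >&2
    fi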
00:05:43.435 real 0m12.157s 00:05:43.435 user 0m3.717s 00:05:43.435 sys 0m7.628s 00:05:43.435 01:43:11 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:43.435 01:43:11 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:05:43.435 ************************************ 00:05:43.435 END TEST denied 00:05:43.435 ************************************ 00:05:43.435 01:43:11 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:43.435 01:43:11 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:43.435 01:43:11 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:43.435 01:43:11 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:43.435 ************************************ 00:05:43.435 START TEST allowed 00:05:43.435 ************************************ 00:05:43.435 01:43:11 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:05:43.435 01:43:11 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:1a:00.0 00:05:43.435 01:43:11 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:05:43.435 01:43:11 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:1a:00.0 .*: nvme -> .*' 00:05:43.435 01:43:11 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:05:43.435 01:43:11 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:05:51.555 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci 00:05:51.555 01:43:20 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:05:51.555 01:43:20 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:05:51.555 01:43:20 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:05:51.555 01:43:20 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:51.555 01:43:20 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:05:58.123 00:05:58.123 real 0m15.121s 00:05:58.123 user 0m4.018s 00:05:58.123 sys 0m7.882s 00:05:58.123 01:43:27 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.123 01:43:27 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:05:58.123 ************************************ 00:05:58.123 END TEST allowed 00:05:58.123 ************************************ 00:05:58.123 00:05:58.123 real 0m36.984s 00:05:58.123 user 0m11.231s 00:05:58.123 sys 0m21.927s 00:05:58.123 01:43:27 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.123 01:43:27 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:58.123 ************************************ 00:05:58.123 END TEST acl 00:05:58.123 ************************************ 00:05:58.123 01:43:27 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:05:58.123 01:43:27 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:58.123 01:43:27 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.123 01:43:27 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:58.123 ************************************ 00:05:58.123 START TEST hugepages 00:05:58.123 ************************************ 00:05:58.123 01:43:27 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/hugepages.sh 00:05:58.123 * Looking for test storage... 
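Each START TEST / END TEST banner above comes from the run_test helper that the suites invoke (run_test setup.sh, run_test acl, run_test denied, run_test allowed in the trace), which is why every test contributes its own real/user/sys block: denied took about 12 s, allowed about 15 s, and the whole acl suite about 37 s of wall time before hugepages starts. The general shape of such a wrapper, sketched here for orientation rather than copied from autotest_common.sh (run_test_sketch is an invented name):

    run_test_sketch() {                 # illustrative only; SPDK's run_test does more bookkeeping
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                       # produces the real/user/sys summary seen in the log
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }

    run_test_sketch example_test sleep 1   # hypothetical invocation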
00:05:58.123 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:05:58.123 01:43:27 setup.sh.hugepages -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:58.123 01:43:27 setup.sh.hugepages -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:58.123 01:43:27 setup.sh.hugepages -- common/autotest_common.sh@1681 -- # lcov --version 00:05:58.123 01:43:27 setup.sh.hugepages -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:58.123 01:43:27 setup.sh.hugepages -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.123 01:43:27 setup.sh.hugepages -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.123 01:43:27 setup.sh.hugepages -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.123 01:43:27 setup.sh.hugepages -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.123 01:43:27 setup.sh.hugepages -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.123 01:43:27 setup.sh.hugepages -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.123 01:43:27 setup.sh.hugepages -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.123 01:43:27 setup.sh.hugepages -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.123 01:43:27 setup.sh.hugepages -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.123 01:43:27 setup.sh.hugepages -- scripts/common.sh@341 -- # ver2_l=1 00:05:58.123 01:43:27 setup.sh.hugepages -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:58.123 01:43:27 setup.sh.hugepages -- scripts/common.sh@344 -- # case "$op" in 00:05:58.123 01:43:27 setup.sh.hugepages -- scripts/common.sh@345 -- # : 1 00:05:58.123 01:43:27 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.123 01:43:27 setup.sh.hugepages -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:58.123 01:43:27 setup.sh.hugepages -- scripts/common.sh@365 -- # decimal 1 00:05:58.123 01:43:27 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=1 00:05:58.123 01:43:27 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.123 01:43:27 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 1 00:05:58.123 01:43:27 setup.sh.hugepages -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.123 01:43:27 setup.sh.hugepages -- scripts/common.sh@366 -- # decimal 2 00:05:58.123 01:43:27 setup.sh.hugepages -- scripts/common.sh@353 -- # local d=2 00:05:58.123 01:43:27 setup.sh.hugepages -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.123 01:43:27 setup.sh.hugepages -- scripts/common.sh@355 -- # echo 2 00:05:58.123 01:43:27 setup.sh.hugepages -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.123 01:43:27 setup.sh.hugepages -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.123 01:43:27 setup.sh.hugepages -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.123 01:43:27 setup.sh.hugepages -- scripts/common.sh@368 -- # return 0 00:05:58.123 01:43:27 setup.sh.hugepages -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.123 01:43:27 setup.sh.hugepages -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:58.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.123 --rc genhtml_branch_coverage=1 00:05:58.123 --rc genhtml_function_coverage=1 00:05:58.123 --rc genhtml_legend=1 00:05:58.123 --rc geninfo_all_blocks=1 00:05:58.123 --rc geninfo_unexecuted_blocks=1 00:05:58.123 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:58.123 ' 00:05:58.123 01:43:27 
setup.sh.hugepages -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:58.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.123 --rc genhtml_branch_coverage=1 00:05:58.123 --rc genhtml_function_coverage=1 00:05:58.123 --rc genhtml_legend=1 00:05:58.123 --rc geninfo_all_blocks=1 00:05:58.123 --rc geninfo_unexecuted_blocks=1 00:05:58.123 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:58.123 ' 00:05:58.123 01:43:27 setup.sh.hugepages -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:58.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.123 --rc genhtml_branch_coverage=1 00:05:58.123 --rc genhtml_function_coverage=1 00:05:58.123 --rc genhtml_legend=1 00:05:58.123 --rc geninfo_all_blocks=1 00:05:58.123 --rc geninfo_unexecuted_blocks=1 00:05:58.123 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:58.123 ' 00:05:58.123 01:43:27 setup.sh.hugepages -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:58.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.123 --rc genhtml_branch_coverage=1 00:05:58.123 --rc genhtml_function_coverage=1 00:05:58.123 --rc genhtml_legend=1 00:05:58.123 --rc geninfo_all_blocks=1 00:05:58.123 --rc geninfo_unexecuted_blocks=1 00:05:58.123 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:05:58.123 ' 00:05:58.123 01:43:27 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:58.123 01:43:27 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:58.123 01:43:27 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:58.123 01:43:27 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:58.123 01:43:27 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:58.123 01:43:27 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:58.123 01:43:27 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:58.123 01:43:27 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:05:58.123 01:43:27 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:05:58.123 01:43:27 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:05:58.123 01:43:27 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:58.123 01:43:27 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:58.123 01:43:27 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:58.123 01:43:27 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:05:58.123 01:43:27 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:58.123 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.123 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.123 01:43:27 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 72060924 kB' 'MemAvailable: 76261696 kB' 'Buffers: 9772 kB' 'Cached: 12516736 kB' 'SwapCached: 0 kB' 'Active: 8933164 kB' 'Inactive: 4107900 kB' 'Active(anon): 8510872 kB' 'Inactive(anon): 0 kB' 'Active(file): 422292 kB' 'Inactive(file): 4107900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517844 kB' 'Mapped: 204280 kB' 
'Shmem: 7996316 kB' 'KReclaimable: 506500 kB' 'Slab: 1112292 kB' 'SReclaimable: 506500 kB' 'SUnreclaim: 605792 kB' 'KernelStack: 17472 kB' 'PageTables: 8680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52434172 kB' 'Committed_AS: 9840852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213080 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB' 00:05:58.123 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.123 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.123 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.123 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.123 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.124 01:43:27 setup.sh.hugepages 
-- setup/common.sh@31 -- # read -r var val _ 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- 
# read -r var val _ 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.124 01:43:27 
setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.124 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.125 01:43:27 
setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.125 01:43:27 
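The wall of "[[ <key> == \H\u\g\e\p\a\g\e\s\i\z\e ]] ... continue" entries through this stretch is setup/common.sh's get_meminfo helper walking /proc/meminfo one field at a time: every key that is not the one requested (here Hugepagesize) is skipped, and when the match is finally reached its value is echoed, which is where the "echo 2048" a few entries further down comes from. A minimal sketch of that read loop, assuming a plain /proc/meminfo read; the function name and simplified structure below are illustrative, not the exact common.sh code:

    # illustrative re-creation of the scan pattern shown in the xtrace
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # each non-matching key logs a "continue" above
            echo "$val"                        # kB for most fields, a page count for HugePages_*
            return 0
        done < /proc/meminfo
        return 1
    }

    # e.g. on this runner: get_meminfo_sketch Hugepagesize  -> 2048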
setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGEMEM 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGENODE 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v NRHUGE 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/hugepages.sh@197 -- # get_nodes 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/hugepages.sh@26 -- # local node 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/hugepages.sh@31 -- # no_nodes=2 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/hugepages.sh@198 -- # clear_hp 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 
0 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes 00:05:58.125 01:43:27 setup.sh.hugepages -- setup/hugepages.sh@200 -- # run_test single_node_setup single_node_setup 00:05:58.125 01:43:27 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:58.125 01:43:27 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:58.125 01:43:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:58.125 ************************************ 00:05:58.125 START TEST single_node_setup 00:05:58.125 ************************************ 00:05:58.125 01:43:27 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1125 -- # single_node_setup 00:05:58.125 01:43:27 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@135 -- # get_test_nr_hugepages 2097152 0 00:05:58.125 01:43:27 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@48 -- # local size=2097152 00:05:58.125 01:43:27 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@49 -- # (( 2 > 1 )) 00:05:58.125 01:43:27 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@50 -- # shift 00:05:58.125 01:43:27 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # node_ids=('0') 00:05:58.125 01:43:27 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # local node_ids 00:05:58.125 01:43:27 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:05:58.125 01:43:27 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:05:58.125 01:43:27 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0 00:05:58.125 01:43:27 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # user_nodes=('0') 00:05:58.125 01:43:27 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # local user_nodes 00:05:58.125 01:43:27 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:05:58.125 01:43:27 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:05:58.125 01:43:27 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # nodes_test=() 00:05:58.125 01:43:27 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # local -g nodes_test 00:05:58.125 01:43:27 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@68 -- # (( 1 > 0 )) 00:05:58.125 01:43:27 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}" 00:05:58.125 01:43:27 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@70 -- # 
nodes_test[_no_nodes]=1024 00:05:58.125 01:43:27 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@72 -- # return 0 00:05:58.125 01:43:27 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # NRHUGE=1024 00:05:58.125 01:43:27 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # HUGENODE=0 00:05:58.125 01:43:27 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # setup output 00:05:58.125 01:43:27 setup.sh.hugepages.single_node_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:05:58.125 01:43:27 setup.sh.hugepages.single_node_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:06:00.659 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:00.918 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:00.918 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:00.918 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:00.918 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:00.918 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:00.918 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:00.918 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:00.918 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:00.918 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:00.918 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:00.918 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:00.918 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:00.918 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:00.918 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:00.918 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:04.206 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@137 -- # verify_nr_hugepages 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@88 -- # local node 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@89 -- # local sorted_t 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@90 -- # local sorted_s 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@91 -- # local surp 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@92 -- # local resv 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@93 -- # local anon 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:06:06.746 01:43:35 
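By this point hugepages.sh has turned the 2097152 kB request from get_test_nr_hugepages into a per-node plan: 2097152 kB / 2048 kB per page = 1024 pages, all pinned to node 0 (hence NRHUGE=1024, HUGENODE=0), after clear_hp first zeroed every existing pool; scripts/setup.sh then applies the reservation and, as the lines above show, rebinds the ioatdma channels and the NVMe device at 0000:1a:00.0 to vfio-pci. A rough sketch of the hugepage bookkeeping only, using the sysfs paths visible in the trace; this is illustrative (root required), not the actual clear_hp/setup.sh code:

    size_kb=2097152                  # requested hugepage memory (HUGEMEM), in kB
    page_kb=2048                     # default Hugepagesize read from /proc/meminfo
    node=0                           # HUGENODE: pin the whole reservation to node 0
    nr=$(( size_kb / page_kb ))      # 1024 pages

    # clear_hp: zero every existing pool on every node first (the "echo 0" entries above)
    for hp in /sys/devices/system/node/node*/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"
    done

    # then reserve 1024 x 2048 kB pages on the chosen node
    echo "$nr" > "/sys/devices/system/node/node${node}/hugepages/hugepages-${page_kb}kB/nr_hugepages"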
setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74193928 kB' 'MemAvailable: 78394716 kB' 'Buffers: 9772 kB' 'Cached: 12517576 kB' 'SwapCached: 0 kB' 'Active: 8933192 kB' 'Inactive: 4107900 kB' 'Active(anon): 8510900 kB' 'Inactive(anon): 0 kB' 'Active(file): 422292 kB' 'Inactive(file): 4107900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517000 kB' 'Mapped: 204280 kB' 'Shmem: 7997156 kB' 'KReclaimable: 506516 kB' 'Slab: 1111188 kB' 'SReclaimable: 506516 kB' 'SUnreclaim: 604672 kB' 'KernelStack: 17408 kB' 'PageTables: 8344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 9843604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213160 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB' 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup 
-- setup/common.sh@32 -- # continue 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.746 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 
00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.747 01:43:35 
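Each get_meminfo call in verify_nr_hugepages first decides which file to scan: with no node argument (the empty "local node=" logged at common.sh@18), the "[[ -e /sys/devices/system/node/node/meminfo ]]" test at @23 fails and the helper stays on the global /proc/meminfo, which is why a full system-wide snapshot is printed above. A condensed sketch of that selection step, hedged to what the trace shows; the variable names mirror the locals logged at @17-@22, but the structure is simplified:

    get=AnonHugePages        # the field being resolved in this part of the log
    node=                    # empty: no per-node lookup requested
    mem_f=/proc/meminfo

    # when a node id is passed, prefer that node's own meminfo file
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi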
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.747 01:43:35 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.747 01:43:35 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:06:06.747 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # anon=0 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74193768 kB' 'MemAvailable: 78394556 kB' 'Buffers: 9772 kB' 'Cached: 12517576 kB' 'SwapCached: 0 kB' 'Active: 8932800 kB' 'Inactive: 4107900 kB' 'Active(anon): 8510508 kB' 'Inactive(anon): 0 kB' 'Active(file): 422292 kB' 'Inactive(file): 4107900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516628 kB' 'Mapped: 204248 kB' 'Shmem: 7997156 kB' 'KReclaimable: 506516 kB' 'Slab: 1111276 kB' 'SReclaimable: 506516 kB' 'SUnreclaim: 604760 kB' 'KernelStack: 17392 kB' 'PageTables: 8284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'CommitLimit: 53482748 kB' 'Committed_AS: 9843628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213128 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB' 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.748 
01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.748 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.749 
01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.749 01:43:35 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.749 01:43:35 
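The surrounding scans are verify_nr_hugepages gathering its sanity counters before it checks the reservation: AnonHugePages resolved to 0 above (checked because transparent hugepages read "always [madvise] never", i.e. not forced off), and HugePages_Surp and HugePages_Rsvd are resolved here and just below, both also 0. A self-contained sketch of that collection step and the kind of consistency check it feeds; the final per-node comparison is not part of this excerpt, so the last line is illustrative only:

    # resolve the same counters the xtrace is walking through
    meminfo() { awk -v k="$1" '$1 == (k ":") { print $2 }' /proc/meminfo; }

    anon=$(meminfo AnonHugePages)      # transparent hugepages currently in use
    surp=$(meminfo HugePages_Surp)     # surplus pages allocated beyond nr_hugepages
    resv=$(meminfo HugePages_Rsvd)     # pages reserved but not yet faulted in
    total=$(meminfo HugePages_Total)

    # illustrative check: with surp=0 the pool should match the 1024-page request
    expected=1024
    (( total - surp == expected )) && echo "hugepage pool matches the request"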
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # surp=0 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:06:06.749 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74193568 kB' 'MemAvailable: 78394356 kB' 'Buffers: 9772 kB' 'Cached: 12517600 kB' 'SwapCached: 0 kB' 'Active: 8933324 kB' 'Inactive: 4107900 kB' 'Active(anon): 8511032 kB' 'Inactive(anon): 0 kB' 'Active(file): 422292 kB' 'Inactive(file): 4107900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517124 kB' 'Mapped: 204248 kB' 'Shmem: 7997180 kB' 'KReclaimable: 506516 kB' 'Slab: 1111276 kB' 'SReclaimable: 
506516 kB' 'SUnreclaim: 604760 kB' 'KernelStack: 17440 kB' 'PageTables: 8488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 9844148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213144 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB' 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.750 01:43:35 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.750 01:43:35 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.750 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.751 01:43:35 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.751 01:43:35 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.751 01:43:35 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # resv=0 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:06:06.751 nr_hugepages=1024 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:06:06.751 resv_hugepages=0 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:06:06.751 surplus_hugepages=0 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:06:06.751 anon_hugepages=0 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:06:06.751 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74193568 kB' 'MemAvailable: 78394356 kB' 'Buffers: 9772 kB' 'Cached: 12517628 kB' 'SwapCached: 0 kB' 'Active: 8933356 kB' 'Inactive: 4107900 kB' 'Active(anon): 8511064 kB' 'Inactive(anon): 0 kB' 'Active(file): 422292 kB' 'Inactive(file): 4107900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517156 kB' 'Mapped: 204248 kB' 'Shmem: 7997208 kB' 'KReclaimable: 506516 kB' 'Slab: 1111276 kB' 'SReclaimable: 506516 kB' 'SUnreclaim: 604760 kB' 'KernelStack: 17440 kB' 'PageTables: 8468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 9844172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213144 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB' 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@32 -- # continue 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 
00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.752 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 
00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.753 01:43:35 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 1024 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@111 -- # get_nodes 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@26 -- # local node 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@31 -- # no_nodes=2 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=0 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 
00:06:06.753 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064864 kB' 'MemFree: 33615272 kB' 'MemUsed: 14449592 kB' 'SwapCached: 0 kB' 'Active: 6794236 kB' 'Inactive: 3878016 kB' 'Active(anon): 6584004 kB' 'Inactive(anon): 0 kB' 'Active(file): 210232 kB' 'Inactive(file): 3878016 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10420316 kB' 'Mapped: 100336 kB' 'AnonPages: 255064 kB' 'Shmem: 6332068 kB' 'KernelStack: 10360 kB' 'PageTables: 5284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 233768 kB' 'Slab: 531776 kB' 'SReclaimable: 233768 kB' 'SUnreclaim: 298008 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 
-- # read -r var val _ 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ 
FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # 
continue 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.754 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.755 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.755 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.755 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 
00:06:06.755 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.755 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.755 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.755 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.755 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.755 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.755 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.755 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.755 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.755 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.755 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.755 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.755 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.755 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.755 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.755 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.755 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.755 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.755 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:06:06.755 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:06:06.755 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:06:06.755 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:06.755 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:06:06.755 01:43:35 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:06:06.755 01:43:35 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:06:06.755 01:43:36 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:06:06.755 01:43:36 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:06:06.755 01:43:36 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:06:06.755 01:43:36 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:06:06.755 node0=1024 expecting 1024 00:06:06.755 01:43:36 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:06:06.755 00:06:06.755 real 0m8.459s 00:06:06.755 user 0m1.588s 00:06:06.755 sys 0m3.681s 00:06:06.755 01:43:36 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 
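Editor's note: the long key-by-key scan traced above is the test's meminfo lookup walking /proc/meminfo (or a per-node meminfo file) until it reaches the requested field, here HugePages_Surp, and echoing its value. A minimal stand-alone sketch of that same approach follows; it is not the SPDK helper itself, and the function name meminfo_lookup is hypothetical.

#!/usr/bin/env bash
# Stand-alone sketch of the scan traced above: pick a meminfo file,
# strip any "Node N" prefix, then read "key: value" pairs until the
# requested field turns up and echo its value.
shopt -s extglob                     # needed for the +([0-9]) prefix strip

meminfo_lookup() {                   # hypothetical helper, not the SPDK one
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node lookups read the node's own meminfo file when it exists.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")      # per-node files prefix every line
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue  # keep scanning until the key matches
        echo "$val"                       # e.g. 0 for HugePages_Surp above
        return 0
    done
    return 1
}

meminfo_lookup HugePages_Surp        # prints 0 in the run traced above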
00:06:06.755 01:43:36 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@10 -- # set +x 00:06:06.755 ************************************ 00:06:06.755 END TEST single_node_setup 00:06:06.755 ************************************ 00:06:06.755 01:43:36 setup.sh.hugepages -- setup/hugepages.sh@201 -- # run_test even_2G_alloc even_2G_alloc 00:06:06.755 01:43:36 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:06.755 01:43:36 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:06.755 01:43:36 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:06.755 ************************************ 00:06:06.755 START TEST even_2G_alloc 00:06:06.755 ************************************ 00:06:06.755 01:43:36 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:06:06.755 01:43:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@142 -- # get_test_nr_hugepages 2097152 00:06:06.755 01:43:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:06:06.755 01:43:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:06:06.755 01:43:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:06:06.755 01:43:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:06:06.755 01:43:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:06:06.755 01:43:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:06:06.755 01:43:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:06:06.755 01:43:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:06:06.755 01:43:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:06:06.755 01:43:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:06:06.755 01:43:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:06:06.755 01:43:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:06:06.755 01:43:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:06:06.755 01:43:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:06:06.755 01:43:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:06:06.755 01:43:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 512 00:06:06.755 01:43:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 1 00:06:06.755 01:43:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:06:06.755 01:43:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:06:06.755 01:43:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 0 00:06:06.755 01:43:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:06:06.755 01:43:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:06:06.755 01:43:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # NRHUGE=1024 00:06:06.755 01:43:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # setup output 00:06:06.755 01:43:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 
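Editor's note: the get_test_nr_hugepages trace just above converts the requested size (2097152 kB, i.e. 2 GiB) into a hugepage count and then spreads it evenly over the NUMA nodes, ending with 512 pages per node and NRHUGE=1024 handed to setup.sh. A simplified sketch of that split is below; it assumes 2048 kB hugepages and two nodes, as in this run, rather than probing them the way the script does.

#!/usr/bin/env bash
# Sketch of the even per-node hugepage split seen in the trace:
# 2 GiB requested / 2048 kB per hugepage = 1024 pages, halved across 2 nodes.
size_kb=2097152          # requested allocation, as passed to the test
hugepage_kb=2048         # Hugepagesize reported later in this log
no_nodes=2               # NUMA node count (assumed; the script detects it)

nr_hugepages=$(( size_kb / hugepage_kb ))          # 1024
declare -a nodes_test
node=$(( no_nodes - 1 ))
while (( node >= 0 )); do
    nodes_test[node]=$(( nr_hugepages / no_nodes ))   # 512 per node
    (( node-- ))
done
# Remainder handling in the real script is omitted here (none in this run).
printf 'node%d=%d pages\n' 0 "${nodes_test[0]}" 1 "${nodes_test[1]}"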
00:06:06.755 01:43:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:06:10.048 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:06:10.048 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver 00:06:10.048 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:06:10.048 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:06:10.048 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:06:10.048 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:06:10.048 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:06:10.048 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:06:10.048 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:06:10.048 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:06:10.048 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:06:10.048 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:06:10.048 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:06:10.048 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:06:10.048 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:06:10.048 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:06:10.048 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:06:11.966 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@144 -- # verify_nr_hugepages 00:06:11.966 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@88 -- # local node 00:06:11.966 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:06:11.966 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:06:11.966 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local surp 00:06:11.966 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local resv 00:06:11.966 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local anon 00:06:11.966 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:11.966 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:06:11.966 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:11.966 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:06:11.966 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:11.966 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:11.966 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:11.966 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:11.966 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:11.966 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:11.966 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:11.966 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.966 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.966 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 92285444 kB' 'MemFree: 74215308 kB' 'MemAvailable: 78416048 kB' 'Buffers: 9772 kB' 'Cached: 12517900 kB' 'SwapCached: 0 kB' 'Active: 8933120 kB' 'Inactive: 4107900 kB' 'Active(anon): 8510828 kB' 'Inactive(anon): 0 kB' 'Active(file): 422292 kB' 'Inactive(file): 4107900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516720 kB' 'Mapped: 203452 kB' 'Shmem: 7997480 kB' 'KReclaimable: 506468 kB' 'Slab: 1111288 kB' 'SReclaimable: 506468 kB' 'SUnreclaim: 604820 kB' 'KernelStack: 17424 kB' 'PageTables: 8344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 9834100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213176 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB' 00:06:11.966 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.966 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.966 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.966 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.966 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.966 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.966 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.966 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.966 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.966 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.966 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.966 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.966 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.966 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.966 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.966 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.966 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.966 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.966 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.966 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.966 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.966 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.966 01:43:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.966 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.966 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.966 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.966 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.967 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- 
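Editor's note: further down this scan the three lookups complete (anon=0, surp=0, then HugePages_Rsvd is queried), after which verify_nr_hugepages totals the per-node counts and prints the same kind of "nodeN=X expecting Y" line that closed the single_node_setup run above. A compact sketch of that closing comparison follows, with made-up per-node numbers standing in for the values the script reads from the node meminfo files.

#!/usr/bin/env bash
# Sketch of the closing check: accumulate per-node hugepage counts,
# print them next to the expected split, and fail on any mismatch.
expected=(512 512)     # per-node target from the even split (assumed)
nodes_test=(512 512)   # per-node pages actually observed (assumed)
surp=0                 # HugePages_Surp read via the meminfo lookup

for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += surp ))   # fold in surplus pages (0 in this run)
    echo "node${node}=${nodes_test[node]} expecting ${expected[node]}"
    (( nodes_test[node] == expected[node] )) || exit 1
done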
setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # anon=0 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74214360 kB' 'MemAvailable: 78415100 kB' 'Buffers: 9772 kB' 'Cached: 12517904 kB' 'SwapCached: 0 kB' 'Active: 8934100 kB' 'Inactive: 4107900 kB' 'Active(anon): 8511808 kB' 'Inactive(anon): 0 kB' 'Active(file): 422292 kB' 'Inactive(file): 4107900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517656 kB' 'Mapped: 203592 kB' 'Shmem: 7997484 kB' 'KReclaimable: 506468 kB' 'Slab: 1111280 kB' 'SReclaimable: 506468 kB' 'SUnreclaim: 604812 kB' 'KernelStack: 17424 kB' 'PageTables: 8348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 9834116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213096 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB' 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.968 01:43:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.968 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # surp=0 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:06:11.969 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:11.970 01:43:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74214296 kB' 'MemAvailable: 78415036 kB' 'Buffers: 9772 kB' 'Cached: 12517940 kB' 'SwapCached: 0 kB' 'Active: 8933284 kB' 'Inactive: 4107900 kB' 'Active(anon): 8510992 kB' 'Inactive(anon): 0 kB' 'Active(file): 422292 kB' 'Inactive(file): 4107900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516548 kB' 'Mapped: 203408 kB' 'Shmem: 7997520 kB' 'KReclaimable: 506468 kB' 'Slab: 1111272 kB' 'SReclaimable: 506468 kB' 'SUnreclaim: 604804 kB' 'KernelStack: 17424 kB' 'PageTables: 8264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 9834140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213096 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB' 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.970 01:43:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.970 01:43:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.970 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.971 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.971 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.971 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.971 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.971 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.971 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.971 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.971 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.971 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.971 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.971 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.971 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.971 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.971 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.971 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.971 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.971 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.971 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.971 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.971 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:11.971 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:11.971 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:11.971 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:11.971 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.233 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.233 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.233 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.233 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.233 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.233 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.233 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.233 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.233 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.233 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.233 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.233 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.233 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.233 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.233 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.233 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.233 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.233 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.233 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.233 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.233 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
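Editor's sketch: the trace above is a plain field lookup over /proc/meminfo. The shape below is reconstructed from the traced commands (local get/node, mem_f fallback, IFS=': ' read loop, echo of the matched value); it is a simplified, assumed rendering of setup/common.sh's get_meminfo, not the literal SPDK code.

    # Minimal sketch of the meminfo lookup being traced here (assumed names).
    get_meminfo_sketch() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        # Per-node lookups read that NUMA node's own meminfo instead.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        local var val _
        while IFS=': ' read -r var val _; do
            # Skip every field until the requested key, then print its value.
            [[ $var == "$get" ]] && { echo "${val%% *}"; return 0; }
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }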
00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # resv=0 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:06:12.234 nr_hugepages=1024 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:06:12.234 resv_hugepages=0 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:06:12.234 surplus_hugepages=0 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:06:12.234 anon_hugepages=0 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 
-- # [[ -n '' ]] 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74215212 kB' 'MemAvailable: 78415952 kB' 'Buffers: 9772 kB' 'Cached: 12517944 kB' 'SwapCached: 0 kB' 'Active: 8933496 kB' 'Inactive: 4107900 kB' 'Active(anon): 8511204 kB' 'Inactive(anon): 0 kB' 'Active(file): 422292 kB' 'Inactive(file): 4107900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516992 kB' 'Mapped: 203408 kB' 'Shmem: 7997524 kB' 'KReclaimable: 506468 kB' 'Slab: 1111272 kB' 'SReclaimable: 506468 kB' 'SUnreclaim: 604804 kB' 'KernelStack: 17408 kB' 'PageTables: 8296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 9834160 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213096 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB' 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.234 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.235 01:43:41 
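Editor's sketch: the values echoed in this stretch (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) feed a simple arithmetic consistency check. A hedged reconstruction of that bookkeeping, reusing the get_meminfo_sketch helper assumed above rather than the literal setup/hugepages.sh code:

    # Rough shape of the check the trace is exercising: the kernel must
    # report exactly the requested number of 2M hugepages.
    verify_even_2G_alloc() {
        local nr_hugepages=1024
        local surp resv total
        surp=$(get_meminfo_sketch HugePages_Surp)
        resv=$(get_meminfo_sketch HugePages_Rsvd)
        total=$(get_meminfo_sketch HugePages_Total)
        echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
        (( total == nr_hugepages + surp + resv ))
    }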
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.235 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@26 -- # local node 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064864 kB' 'MemFree: 34670280 kB' 'MemUsed: 13394584 kB' 'SwapCached: 0 kB' 'Active: 6793452 kB' 'Inactive: 3878016 kB' 'Active(anon): 6583220 kB' 'Inactive(anon): 0 kB' 
'Active(file): 210232 kB' 'Inactive(file): 3878016 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10420472 kB' 'Mapped: 99844 kB' 'AnonPages: 254128 kB' 'Shmem: 6332224 kB' 'KernelStack: 10312 kB' 'PageTables: 5072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 233760 kB' 'Slab: 531768 kB' 'SReclaimable: 233760 kB' 'SUnreclaim: 298008 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.236 01:43:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.236 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.237 01:43:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
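Editor's sketch: the per-node leg of the test reads each NUMA node's own meminfo (node0 above, node1 below) and expects the 1024 pages to be split evenly, 512 per node. A simplified, assumed version of that loop (again building on get_meminfo_sketch; the real nodes_test/resv bookkeeping is richer):

    # Verify the even 2G allocation across both NUMA nodes.
    check_even_split() {
        local node expected=512 got
        for node in /sys/devices/system/node/node[0-9]*; do
            node=${node##*node}
            got=$(get_meminfo_sketch HugePages_Total "$node")
            echo "node$node: HugePages_Total=$got (expected $expected)"
            (( got == expected )) || return 1
        done
    }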
00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44220580 kB' 'MemFree: 39549160 kB' 'MemUsed: 4671420 kB' 'SwapCached: 0 kB' 'Active: 2140396 kB' 'Inactive: 229884 kB' 'Active(anon): 1928336 kB' 
'Inactive(anon): 0 kB' 'Active(file): 212060 kB' 'Inactive(file): 229884 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2107288 kB' 'Mapped: 103564 kB' 'AnonPages: 263228 kB' 'Shmem: 1665344 kB' 'KernelStack: 7112 kB' 'PageTables: 3280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 272708 kB' 'Slab: 579504 kB' 'SReclaimable: 272708 kB' 'SUnreclaim: 306796 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.237 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.238 01:43:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.238 01:43:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.238 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.239 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.239 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.239 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.239 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.239 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.239 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.239 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.239 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.239 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.239 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
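The scan above (and every other one in this log) is the same get_meminfo pattern from setup/common.sh: pick /proc/meminfo or the requested node's per-node meminfo file, strip the "Node <n> " prefix, then walk key/value pairs until the requested field is found. A condensed, self-contained reconstruction from the xtrace -- not the literal SPDK source -- looks roughly like this:

    #!/usr/bin/env bash
    # Reconstruction of the get_meminfo pattern the xtrace above repeats.
    # Simplified from the trace; not the literal setup/common.sh implementation.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem line

        mem_f=/proc/meminfo
        # When a NUMA node is given and its per-node meminfo exists, read that file instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <n> "; strip it so keys line up.
        mem=("${mem[@]#Node +([0-9]) }")

        # Walk "Key: value ..." entries until the requested key matches.
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        echo 0
    }

    get_meminfo HugePages_Surp 1    # surplus 2 MiB pages on NUMA node 1 (0 in this run)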
00:06:12.239 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.239 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.239 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.239 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.239 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.239 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.239 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.239 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.239 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.239 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.239 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:06:12.239 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:12.239 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:12.239 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:12.239 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:06:12.239 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:06:12.239 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:06:12.239 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:06:12.239 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:06:12.239 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:06:12.239 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512' 00:06:12.239 node0=512 expecting 512 00:06:12.239 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:06:12.239 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:06:12.239 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:06:12.239 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512' 00:06:12.239 node1=512 expecting 512 00:06:12.239 01:43:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@129 -- # [[ 512 == \5\1\2 ]] 00:06:12.239 00:06:12.239 real 0m5.632s 00:06:12.239 user 0m1.922s 00:06:12.239 sys 0m3.729s 00:06:12.239 01:43:41 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:12.239 01:43:41 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:12.239 ************************************ 00:06:12.239 END TEST even_2G_alloc 00:06:12.239 ************************************ 00:06:12.239 01:43:41 setup.sh.hugepages -- setup/hugepages.sh@202 -- # run_test odd_alloc odd_alloc 00:06:12.239 01:43:41 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:12.239 01:43:41 
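even_2G_alloc finishes right above by folding reserved and surplus pages into each node's expected count and printing it next to what the kernel reports -- "node0=512 expecting 512" and "node1=512 expecting 512" for the 2 GiB / 1024-page request. A self-contained sketch of that per-node accounting; the nodes_test/nodes_sys seed values and the awk read of the per-node meminfo stand in for the script's own bookkeeping and its get_meminfo helper:

    #!/usr/bin/env bash
    # Per-node verification in the spirit of the hugepages.sh loop traced above.
    # Seed values and the awk read are illustrative stand-ins, not the script's code.

    nodes_test=( [0]=512 [1]=512 )   # expected pages per node (even 2 GiB split)
    nodes_sys=( [0]=512 [1]=512 )    # pages the kernel actually reports per node
    resv=0                           # reserved pages; 0 in this run

    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        # Fold in the node's surplus pages (HugePages_Surp, 0 in this run).
        surp=$(awk '/HugePages_Surp/ {print $NF}' "/sys/devices/system/node/node$node/meminfo" 2>/dev/null)
        (( nodes_test[node] += ${surp:-0} ))
    done

    for node in "${!nodes_test[@]}"; do
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done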
setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.239 01:43:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:12.239 ************************************ 00:06:12.239 START TEST odd_alloc 00:06:12.239 ************************************ 00:06:12.239 01:43:41 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:06:12.239 01:43:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@149 -- # get_test_nr_hugepages 2098176 00:06:12.239 01:43:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@48 -- # local size=2098176 00:06:12.239 01:43:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:06:12.239 01:43:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:06:12.239 01:43:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1025 00:06:12.239 01:43:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:06:12.239 01:43:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:06:12.239 01:43:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:06:12.239 01:43:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1025 00:06:12.239 01:43:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:06:12.239 01:43:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:06:12.239 01:43:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:06:12.239 01:43:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:06:12.239 01:43:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:06:12.239 01:43:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:06:12.239 01:43:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:06:12.239 01:43:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 513 00:06:12.239 01:43:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 1 00:06:12.239 01:43:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:06:12.239 01:43:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=513 00:06:12.239 01:43:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 0 00:06:12.239 01:43:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:06:12.239 01:43:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:06:12.239 01:43:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # HUGEMEM=2049 00:06:12.239 01:43:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # setup output 00:06:12.239 01:43:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:12.239 01:43:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:06:15.526 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:06:15.526 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver 00:06:15.526 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:06:15.526 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:06:15.526 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:06:15.526 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 
00:06:15.526 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:06:15.526 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:06:15.526 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:06:15.526 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:06:15.526 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:06:15.526 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:06:15.526 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:06:15.526 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:06:15.526 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:06:15.526 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:06:15.526 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@151 -- # verify_nr_hugepages 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@88 -- # local node 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local surp 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local resv 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local anon 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74220624 kB' 'MemAvailable: 78421364 kB' 'Buffers: 9772 kB' 'Cached: 12518088 kB' 'SwapCached: 0 kB' 'Active: 8934976 kB' 'Inactive: 4107900 kB' 'Active(anon): 8512684 kB' 'Inactive(anon): 0 kB' 'Active(file): 422292 kB' 'Inactive(file): 4107900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518316 kB' 'Mapped: 203552 kB' 'Shmem: 7997668 kB' 'KReclaimable: 506468 kB' 'Slab: 1110796 kB' 'SReclaimable: 506468 kB' 'SUnreclaim: 604328 kB' 'KernelStack: 17424 kB' 'PageTables: 8668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 
'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481724 kB' 'Committed_AS: 9834948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213224 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB' 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
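The odd_alloc test that produced the dump above asks for HUGEMEM=2049, i.e. size=2098176 kB; with the default 2048 kB huge pages that rounds up to 1025 pages (the "HugePages_Total: 1025" visible above), deliberately an odd count so it cannot split evenly across the two NUMA nodes. The assignment traced earlier gives node1 512 pages and node0 the extra one, 513. A worked version of that arithmetic, with an explicit round-up expression standing in for the script's own sizing math:

    #!/usr/bin/env bash
    # Worked version of the odd_alloc sizing seen above. Sizes come from the trace;
    # the round-up expression and the remainder handling are illustrative.

    size_kb=2098176      # HUGEMEM=2049 MiB expressed in kB
    page_kb=2048         # default 2 MiB huge page size
    nodes=2

    # 2098176 / 2048 = 1024.5 -> round up to 1025 pages, an intentionally odd count.
    nr_hugepages=$(( (size_kb + page_kb - 1) / page_kb ))

    per_node=$(( nr_hugepages / nodes ))    # 512
    extra=$(( nr_hugepages % nodes ))       # 1, handed to node0
    echo "node0=$(( per_node + extra )) node1=$per_node nr_hugepages=$nr_hugepages"
    # -> node0=513 node1=512 nr_hugepages=1025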
00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.438 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
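The AnonHugePages scan running through this stretch is gated by the "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]" check traced a little earlier: the test only counts anonymous huge pages when transparent huge pages are not set to [never]. A self-contained equivalent that reads the standard sysfs knob and /proc/meminfo directly instead of going through get_meminfo:

    #!/usr/bin/env bash
    # Self-contained equivalent of the anon-hugepage gate traced above;
    # the awk read stands in for the script's get_meminfo helper.

    anon=0
    thp_state=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp_state != *"[never]"* ]]; then
        # THP is not disabled, so anonymous huge pages may exist; AnonHugePages is in kB.
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    fi
    echo "anon=${anon:-0}"    # 0 in this run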
00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # anon=0 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74219368 kB' 'MemAvailable: 78420108 kB' 'Buffers: 9772 kB' 'Cached: 12518088 kB' 'SwapCached: 0 kB' 'Active: 8935924 kB' 'Inactive: 4107900 kB' 'Active(anon): 8513632 kB' 'Inactive(anon): 0 kB' 'Active(file): 422292 kB' 'Inactive(file): 4107900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519332 kB' 'Mapped: 203552 kB' 'Shmem: 7997668 kB' 'KReclaimable: 506468 kB' 'Slab: 1110844 kB' 'SReclaimable: 506468 kB' 'SUnreclaim: 604376 kB' 'KernelStack: 17488 kB' 'PageTables: 8884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481724 kB' 'Committed_AS: 9834620 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213176 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB' 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.439 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.440 01:43:46 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.440 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
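Every one of these field-by-field scans is ultimately after the HugePages_* block at the bottom of /proc/meminfo; the system-wide dump above shows Total 1025, Free 1025, Rsvd 0, Surp 0. Outside the harness the same counters can be read in one line:

    # Read the huge page counters the scans above walk to, without the per-field loop:
    grep -E '^(HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize):' /proc/meminfo
    # HugePages_Total -- size of the persistent huge page pool (1025 here)
    # HugePages_Free  -- pool pages not currently backing a mapping (1025 here)
    # HugePages_Rsvd  -- pages reserved for mappings but not yet faulted in (0 here)
    # HugePages_Surp  -- pages allocated above the configured pool size (0 here)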
[... 00:06:17.440-00:06:17.441 setup/common.sh@31-32: each remaining /proc/meminfo key from KReclaimable through HugePages_Rsvd is checked against HugePages_Surp and skipped with continue ...]
00:06:17.441 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:17.441 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:06:17.441 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:06:17.441 01:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # surp=0
00:06:17.441 01:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:06:17.441 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:06:17.441 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:06:17.441 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:06:17.441 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:17.441 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:17.441 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:17.441 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:17.441 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:17.441 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:17.441 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:17.441 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:17.441 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74220388 kB' 'MemAvailable: 78421128 kB' 'Buffers: 9772 kB' 'Cached: 12518112 kB' 'SwapCached: 0 kB' 'Active: 8934864 kB' 'Inactive: 4107900 kB' 'Active(anon): 8512572 kB' 'Inactive(anon): 0 kB' 'Active(file): 422292 kB' 'Inactive(file): 4107900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518172 kB' 'Mapped: 203580 kB' 'Shmem: 7997692 kB' 'KReclaimable: 506468 kB' 'Slab: 1110792 kB' 'SReclaimable: 506468 kB' 'SUnreclaim: 604324 kB' 'KernelStack: 17424 kB' 'PageTables: 8656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481724 kB' 'Committed_AS: 9834848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213144 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB'
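The entries above show one complete get_meminfo lookup from setup/common.sh (HugePages_Surp, which echoed 0) and the start of the next one for HugePages_Rsvd: the helper snapshots the meminfo text (the printf entry directly above), then reads it line by line with IFS=': ' and read -r var val _, continuing past every key that is not the requested one and echoing the value of the first match. A minimal standalone sketch of that lookup pattern, using an illustrative function name rather than the real SPDK helper:

    #!/usr/bin/env bash
    # Sketch only: find one key in /proc/meminfo the same way the traced loop
    # does: split each line on ': ', skip non-matching keys, print the value
    # of the first match.
    meminfo_lookup() {
        local get="$1" var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the continue entries in the trace
            echo "$val"
            return 0
        done </proc/meminfo
        return 1
    }

    rsvd=$(meminfo_lookup HugePages_Rsvd)   # 0 in the dump shown above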
[... 00:06:17.441-00:06:17.443 setup/common.sh@31-32: every /proc/meminfo key from MemTotal through HugePages_Free is checked against HugePages_Rsvd and skipped with continue ...]
00:06:17.443 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:17.443 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:06:17.443 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:06:17.443 01:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # resv=0
00:06:17.443 01:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1025
00:06:17.443 nr_hugepages=1025
00:06:17.443 01:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:06:17.443 resv_hugepages=0
00:06:17.443 01:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:06:17.443 surplus_hugepages=0
00:06:17.443 01:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:06:17.443 anon_hugepages=0
00:06:17.443 01:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@106 -- # (( 1025 == nr_hugepages + surp + resv ))
00:06:17.443 01:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@108 -- # (( 1025 == nr_hugepages ))
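With HugePages_Surp and HugePages_Rsvd both read back as 0 and nr_hugepages reported as 1025, the odd_alloc case checks that the counters are mutually consistent: the (( 1025 == nr_hugepages + surp + resv )) and (( 1025 == nr_hugepages )) entries above assert that the odd request is satisfied exactly, with no reserved or surplus pages, and the same identity is re-checked against HugePages_Total just below. A hedged sketch of that arithmetic with the values from this run (variable names are illustrative, not the exact hugepages.sh code):

    #!/usr/bin/env bash
    # Values observed in the trace above.
    nr_hugepages=1025   # odd page count requested by the test
    surp=0              # HugePages_Surp
    resv=0              # HugePages_Rsvd
    total=1025          # HugePages_Total reported in the meminfo dump
    # Consistency checks: the kernel total accounts for the requested pages
    # plus surplus and reserved pages, and nothing is reserved or surplus.
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch"
    (( total == nr_hugepages )) || echo "reserved or surplus pages present"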
00:06:17.443 01:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:06:17.443 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:06:17.443 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:06:17.443 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:06:17.443 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:17.443 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:17.443 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:17.443 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:17.443 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:17.443 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:17.443 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:17.443 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:17.443 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74220388 kB' 'MemAvailable: 78421128 kB' 'Buffers: 9772 kB' 'Cached: 12518148 kB' 'SwapCached: 0 kB' 'Active: 8934940 kB' 'Inactive: 4107900 kB' 'Active(anon): 8512648 kB' 'Inactive(anon): 0 kB' 'Active(file): 422292 kB' 'Inactive(file): 4107900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518268 kB' 'Mapped: 203476 kB' 'Shmem: 7997728 kB' 'KReclaimable: 506468 kB' 'Slab: 1110792 kB' 'SReclaimable: 506468 kB' 'SUnreclaim: 604324 kB' 'KernelStack: 17424 kB' 'PageTables: 8340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53481724 kB' 'Committed_AS: 9834868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213144 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB'
[... 00:06:17.443-00:06:17.444 setup/common.sh@31-32: every /proc/meminfo key from MemTotal through Unaccepted is checked against HugePages_Total and skipped with continue ...]
00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages + surp + resv ))
00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@111 -- # get_nodes
00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@26 -- # local node
00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=513
00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512
00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@31 -- # no_nodes=2
00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
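get_nodes has just recorded the per-node hugepage counts exposed under /sys/devices/system/node (nodes_sys[0]=513 and nodes_sys[1]=512 on this two-node machine, no_nodes=2), which matches the 513/512 split you get when an odd total of 1025 pages is spread over two NUMA nodes and one node takes the extra page. Below, the test loops over the nodes and re-reads the counters from each node's meminfo file. An illustrative sketch of that expectation and of a per-node read; the awk one-liner is only an illustration (the SPDK helper uses the read loop traced earlier), and the sysfs layout assumed here is the standard per-node meminfo path:

    #!/usr/bin/env bash
    # Expected split of an odd hugepage count across NUMA nodes, plus a simple
    # per-node read of HugePages_Total from the node's meminfo file, whose
    # lines look like: "Node 0 HugePages_Total:   513".
    nr_hugepages=1025
    no_nodes=2
    for ((node = 0; node < no_nodes; node++)); do
        expected=$((nr_hugepages / no_nodes + (node < nr_hugepages % no_nodes ? 1 : 0)))
        actual=$(awk -v key="HugePages_Total" '$3 == key":" {print $4}' \
            "/sys/devices/system/node/node${node}/meminfo")
        echo "node${node}: expected=${expected} kernel=${actual}"
    done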
00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0
00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064864 kB' 'MemFree: 34666316 kB' 'MemUsed: 13398548 kB' 'SwapCached: 0 kB' 'Active: 6794840 kB' 'Inactive: 3878016 kB' 'Active(anon): 6584608 kB' 'Inactive(anon): 0 kB' 'Active(file): 210232 kB' 'Inactive(file): 3878016 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10420576 kB' 'Mapped: 99852 kB' 'AnonPages: 255512 kB' 'Shmem: 6332328 kB' 'KernelStack: 10344 kB' 'PageTables: 5216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 233760 kB' 'Slab: 531560 kB' 'SReclaimable: 233760 kB' 'SUnreclaim: 297800 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
[... 00:06:17.445 setup/common.sh@31-32: the node0 meminfo keys from MemTotal through Shmem are checked against HugePages_Surp and skipped with continue ...]
00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc
-- setup/common.sh@31 -- # IFS=': ' 00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.445 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.446 01:43:46 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44220580 kB' 'MemFree: 39557332 kB' 'MemUsed: 4663248 kB' 'SwapCached: 0 kB' 'Active: 2140664 kB' 'Inactive: 229884 kB' 'Active(anon): 1928604 kB' 'Inactive(anon): 0 kB' 'Active(file): 212060 kB' 'Inactive(file): 229884 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2107344 kB' 'Mapped: 104128 kB' 'AnonPages: 263336 kB' 'Shmem: 1665400 kB' 'KernelStack: 7080 kB' 'PageTables: 3172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 272708 kB' 'Slab: 579232 kB' 'SReclaimable: 272708 kB' 'SUnreclaim: 306524 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.446 01:43:46 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.446 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.447 01:43:46 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
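The long run of "[[ <field> == HugePages_Surp ]] ... continue" entries around this point is the get_meminfo helper walking every field of /sys/devices/system/node/node1/meminfo (it did the same for node0 just above). A minimal re-creation of that helper for readability only; the get_meminfo_sketch name is ours, while the paths, the "Node N " prefix stripping and the IFS=': ' parsing come straight from the trace:

shopt -s extglob

# Sketch (not the SPDK common.sh source): read /proc/meminfo or a per-node
# sysfs meminfo file and print the value of one field.
get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    # Per-node queries switch to the sysfs copy, e.g. /sys/devices/system/node/node1/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Node files prefix every line with "Node <id> "; strip it so the keys match /proc/meminfo
    mem=("${mem[@]#Node +([0-9]) }")
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        # The trace above is exactly this comparison, repeated once per field
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

# e.g. get_meminfo_sketch HugePages_Total 1  ->  512 on the node1 layout dumped above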
00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node0=513 expecting 513' 00:06:17.447 node0=513 expecting 513 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512' 00:06:17.447 node1=512 expecting 512 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@129 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:06:17.447 00:06:17.447 real 0m5.057s 00:06:17.447 user 0m1.553s 00:06:17.447 sys 0m3.462s 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:17.447 01:43:46 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:17.447 ************************************ 00:06:17.447 END TEST odd_alloc 00:06:17.447 ************************************ 00:06:17.447 01:43:46 setup.sh.hugepages -- setup/hugepages.sh@203 -- # run_test custom_alloc custom_alloc 00:06:17.447 01:43:46 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:17.447 01:43:46 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:17.447 01:43:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:17.447 ************************************ 00:06:17.447 START TEST custom_alloc 00:06:17.447 ************************************ 00:06:17.447 01:43:46 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:06:17.447 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@157 -- # local IFS=, 00:06:17.447 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@159 -- # local node 00:06:17.447 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # nodes_hp=() 00:06:17.447 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # local nodes_hp 00:06:17.447 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@162 -- # local nr_hugepages=0 _nr_hugepages=0 00:06:17.447 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@164 -- # get_test_nr_hugepages 1048576 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=1048576 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=512 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=512 
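odd_alloc has just finished: 1025 hugepages were requested and the kernel split them 513 on node0 and 512 on node1 (513 + 512 = 1025). The final check compares the sorted per-node totals against the expected split, which is why the trace shows [[ 512 513 == \5\1\2\ \5\1\3 ]]. A condensed sketch of that comparison, assuming plain indexed arrays as hugepages.sh uses; the values are the ones echoed above:

declare -a nodes_test=() nodes_sys=() sorted_t=() sorted_s=()
nodes_test[0]=513; nodes_test[1]=512   # per-node HugePages_Total + HugePages_Surp (surplus was 0)
nodes_sys[0]=513;  nodes_sys[1]=512    # the split get_nodes read back from /sys/devices/system/node/node*
for node in "${!nodes_test[@]}"; do
    sorted_t[nodes_test[node]]=1
    sorted_s[nodes_sys[node]]=1
    echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
done
# Indexed-array keys list in ascending order, so both sides expand to "512 513"
[[ ${!sorted_s[*]} == "${!sorted_t[*]}" ]] && echo "odd allocation verified"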
00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 256 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 1 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 0 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@165 -- # nodes_hp[0]=512 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@166 -- # (( 2 > 1 )) 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # get_test_nr_hugepages 2097152 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 1 > 0 )) 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@168 -- # nodes_hp[1]=1024 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for 
node in "${!nodes_hp[@]}" 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}" 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # get_test_nr_hugepages_per_node 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 2 > 0 )) 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=1024 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # setup output 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:17.448 01:43:46 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:06:20.736 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:06:20.736 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver 00:06:20.736 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:06:20.736 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:06:20.736 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:06:20.736 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:06:20.736 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:06:20.736 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:06:20.736 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:06:20.736 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:06:20.736 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:06:20.736 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:06:20.736 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 
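Before the setup.sh output above, custom_alloc derived two per-node pools: 512 pages for nodes_hp[0] (1048576 / 2048 = 512) and 1024 pages for nodes_hp[1] (2097152 / 2048 = 1024), consistent with the 2048 kB hugepage size reported in the meminfo dump that follows. It then joined them into the HUGENODE string handed to setup.sh. A small sketch of that assembly; build_hugenode_sketch is our name, the variables and the resulting string are the ones traced:

build_hugenode_sketch() {
    local IFS=,                           # hugepages.sh sets "local IFS=,", so ${HUGENODE[*]} joins with commas
    local -a nodes_hp=([0]=512 [1]=1024)
    local -a HUGENODE=()
    local node _nr_hugepages=0
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
        (( _nr_hugepages += nodes_hp[node] ))
    done
    echo "HUGENODE=${HUGENODE[*]}"        # HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024
    echo "expected total: $_nr_hugepages" # 512 + 1024 = 1536, the nr_hugepages verified next
}
build_hugenode_sketch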
00:06:20.736 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:06:20.736 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:06:20.736 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:06:20.736 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:06:22.649 01:43:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nr_hugepages=1536 00:06:22.649 01:43:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # verify_nr_hugepages 00:06:22.649 01:43:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@88 -- # local node 00:06:22.649 01:43:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:06:22.649 01:43:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:06:22.649 01:43:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local surp 00:06:22.649 01:43:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local resv 00:06:22.649 01:43:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local anon 00:06:22.649 01:43:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:22.649 01:43:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:06:22.649 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:22.649 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:06:22.649 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:22.649 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:22.649 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:22.649 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:22.649 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:22.649 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:22.649 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:22.649 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.649 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 73166220 kB' 'MemAvailable: 77366944 kB' 'Buffers: 9772 kB' 'Cached: 12518296 kB' 'SwapCached: 0 kB' 'Active: 8936268 kB' 'Inactive: 4107900 kB' 'Active(anon): 8513976 kB' 'Inactive(anon): 0 kB' 'Active(file): 422292 kB' 'Inactive(file): 4107900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519052 kB' 'Mapped: 203640 kB' 'Shmem: 7997876 kB' 'KReclaimable: 506452 kB' 'Slab: 1110560 kB' 'SReclaimable: 506452 kB' 'SUnreclaim: 604108 kB' 'KernelStack: 17424 kB' 'PageTables: 8360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52958460 kB' 'Committed_AS: 9835532 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213240 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB' 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.650 01:43:52 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.650 01:43:52 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.650 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # anon=0 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 
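verify_nr_hugepages is now re-reading /proc/meminfo (dumped above with HugePages_Total: 1536) and has already set anon=0, since transparent hugepages are reported as "always [madvise] never" rather than [never]. A short sketch of the accounting it is working towards, assuming the same "total == nr_hugepages + surp + resv" form traced for odd_alloc earlier; the shell variable names below are ours, the figures come from the dump:

nr_hugepages=1536       # 512 (node0) + 1024 (node1) requested via HUGENODE
hugepages_total=1536    # HugePages_Total
hugepages_free=1536     # HugePages_Free
surp=0                  # HugePages_Surp
resv=0                  # HugePages_Rsvd
anon=0                  # AnonHugePages
(( hugepages_total == nr_hugepages + surp + resv )) && echo "1536 pages accounted for"
# Consistency check on the dump itself: 1536 pages * 2048 kB = 3145728 kB, the Hugetlb: value
echo "$(( hugepages_total * 2048 )) kB backed by hugetlbfs"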
00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 73165456 kB' 'MemAvailable: 77366172 kB' 'Buffers: 9772 kB' 'Cached: 12518300 kB' 'SwapCached: 0 kB' 'Active: 8935872 kB' 'Inactive: 4107900 kB' 'Active(anon): 8513580 kB' 'Inactive(anon): 0 kB' 'Active(file): 422292 kB' 'Inactive(file): 4107900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519060 kB' 'Mapped: 203560 kB' 'Shmem: 7997880 kB' 'KReclaimable: 506444 kB' 'Slab: 1110504 kB' 'SReclaimable: 506444 kB' 'SUnreclaim: 604060 kB' 'KernelStack: 17408 kB' 'PageTables: 8292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52958460 kB' 'Committed_AS: 9835548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213224 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB' 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.651 01:43:52 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.651 
01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.651 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.652 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # surp=0 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 73165916 kB' 'MemAvailable: 77366632 kB' 'Buffers: 9772 kB' 'Cached: 12518316 kB' 'SwapCached: 0 kB' 'Active: 8935868 kB' 'Inactive: 4107900 kB' 'Active(anon): 8513576 kB' 'Inactive(anon): 0 kB' 'Active(file): 422292 kB' 'Inactive(file): 4107900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519060 kB' 'Mapped: 203560 kB' 'Shmem: 7997896 
kB' 'KReclaimable: 506444 kB' 'Slab: 1110504 kB' 'SReclaimable: 506444 kB' 'SUnreclaim: 604060 kB' 'KernelStack: 17408 kB' 'PageTables: 8292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52958460 kB' 'Committed_AS: 9835568 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213224 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB' 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.653 01:43:52 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.653 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
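Within each of these get_meminfo calls, the mapfile -t mem / mem=("${mem[@]#Node +([0-9]) }") step visible in the trace just before each printf of the meminfo snapshot strips a leading "Node <n> " prefix, so the same key/value scan works for both /proc/meminfo and the per-NUMA-node meminfo files. That strip relies on an extglob pattern; the following is a small self-contained illustration where the node0 sample lines are hypothetical and only the expansion itself is taken from the trace.

```bash
#!/usr/bin/env bash
# The trace runs: mem=("${mem[@]#Node +([0-9]) }")
# +([0-9]) is an extglob pattern (one or more digits), so extglob must be enabled.
shopt -s extglob

# Hypothetical per-node meminfo lines, for illustration only.
mem=(
	'Node 0 MemTotal:       92285444 kB'
	'Node 0 HugePages_Total:     1536'
)

# Strip the leading "Node <n> " so every entry starts with the field name,
# exactly like the lines read from the global /proc/meminfo.
mem=("${mem[@]#Node +([0-9]) }")

printf '%s\n' "${mem[@]}"
# MemTotal:       92285444 kB
# HugePages_Total:     1536
```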
00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.654 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # resv=0 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1536 00:06:22.655 nr_hugepages=1536 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:06:22.655 resv_hugepages=0 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:06:22.655 surplus_hugepages=0 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:06:22.655 anon_hugepages=0 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@106 -- # (( 1536 == nr_hugepages + surp + resv )) 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@108 -- # (( 1536 == nr_hugepages )) 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 73166168 kB' 'MemAvailable: 77366884 kB' 'Buffers: 9772 kB' 'Cached: 12518340 kB' 'SwapCached: 0 kB' 'Active: 8935896 kB' 'Inactive: 4107900 kB' 'Active(anon): 8513604 kB' 'Inactive(anon): 0 kB' 'Active(file): 422292 kB' 'Inactive(file): 4107900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519060 kB' 'Mapped: 203560 kB' 'Shmem: 7997920 kB' 'KReclaimable: 506444 kB' 'Slab: 1110504 kB' 'SReclaimable: 506444 kB' 'SUnreclaim: 604060 kB' 'KernelStack: 17408 kB' 'PageTables: 8292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52958460 kB' 'Committed_AS: 9835592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213224 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB' 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.655 01:43:52 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.655 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # continue 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages + surp + resv )) 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@26 -- # local node 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:22.656 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064864 kB' 'MemFree: 34672988 kB' 'MemUsed: 13391876 kB' 'SwapCached: 0 kB' 'Active: 6794432 kB' 'Inactive: 3878016 kB' 'Active(anon): 6584200 kB' 'Inactive(anon): 0 kB' 'Active(file): 210232 kB' 'Inactive(file): 3878016 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10420668 kB' 'Mapped: 99856 kB' 'AnonPages: 255040 kB' 'Shmem: 6332420 kB' 'KernelStack: 10344 kB' 'PageTables: 5164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 233760 kB' 'Slab: 531380 kB' 'SReclaimable: 233760 kB' 'SUnreclaim: 297620 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.657 01:43:52 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.657 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 
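The trace above is the per-node branch of get_meminfo: with node=0 it switches the source file from /proc/meminfo to /sys/devices/system/node/node0/meminfo, strips the "Node 0 " prefix from every row, then walks the fields until HugePages_Surp matches and echoes 0. Earlier in the trace the same walk returned HugePages_Total=1536, i.e. the 512 pages configured on node 0 plus the 1024 on node 1. A minimal bash sketch of that lookup pattern (the function name and layout are illustrative, not the exact setup/common.sh source):

  # Sketch: fetch one field from a node's meminfo (or the global /proc/meminfo).
  # Mirrors the parsing traced above; names are illustrative.
  get_meminfo_sketch() {
      local get=$1 node=$2 line var val _
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      while IFS= read -r line; do
          line=${line#"Node $node "}                # per-node files prefix every row with "Node N "
          IFS=': ' read -r var val _ <<< "$line"    # e.g. "HugePages_Surp:  0" -> var=HugePages_Surp val=0
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done < "$mem_f"
      return 1
  }

Called as get_meminfo_sketch HugePages_Surp 0 it would print 0 here, matching the echo 0 / return 0 pair in the trace above.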
00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44220580 kB' 'MemFree: 38496676 kB' 'MemUsed: 5723904 kB' 'SwapCached: 0 kB' 'Active: 2141448 kB' 'Inactive: 229884 kB' 'Active(anon): 1929388 kB' 'Inactive(anon): 0 kB' 'Active(file): 212060 kB' 'Inactive(file): 229884 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2107484 kB' 'Mapped: 103704 kB' 'AnonPages: 264016 kB' 'Shmem: 1665540 kB' 'KernelStack: 7064 kB' 'PageTables: 3128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 272684 kB' 'Slab: 579116 kB' 'SReclaimable: 272684 kB' 'SUnreclaim: 306432 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.658 01:43:52 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.658 01:43:52 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.658 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.659 01:43:52 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512' 00:06:22.659 node0=512 expecting 512 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node1=1024 expecting 1024' 00:06:22.659 node1=1024 expecting 1024 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@129 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:06:22.659 00:06:22.659 real 0m5.273s 00:06:22.659 user 0m1.635s 00:06:22.659 sys 0m3.534s 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.659 01:43:52 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:22.659 ************************************ 00:06:22.659 END TEST custom_alloc 00:06:22.659 ************************************ 00:06:22.659 01:43:52 setup.sh.hugepages -- setup/hugepages.sh@204 -- # run_test no_shrink_alloc no_shrink_alloc 00:06:22.659 01:43:52 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:22.659 01:43:52 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.659 01:43:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:22.659 ************************************ 00:06:22.659 START TEST no_shrink_alloc 00:06:22.659 ************************************ 00:06:22.659 01:43:52 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:06:22.659 01:43:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@185 -- # get_test_nr_hugepages 2097152 0 00:06:22.659 01:43:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:06:22.659 01:43:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # (( 2 > 1 )) 00:06:22.659 01:43:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # shift 00:06:22.659 01:43:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # node_ids=('0') 00:06:22.659 01:43:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # local node_ids 00:06:22.659 01:43:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:06:22.659 01:43:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:06:22.659 01:43:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0 00:06:22.659 01:43:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # user_nodes=('0') 00:06:22.659 01:43:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:06:22.659 01:43:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:06:22.659 01:43:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:06:22.659 01:43:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:06:22.659 01:43:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:06:22.659 01:43:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@68 -- # (( 1 > 0 )) 00:06:22.659 01:43:52 setup.sh.hugepages.no_shrink_alloc -- 
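The "node0=512 expecting 512" and "node1=1024 expecting 1024" lines above are custom_alloc's verdict: the per-node counts read back from sysfs match what the test configured, the final [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] comparison passes, and the test finishes in roughly 5.3 s of wall time before no_shrink_alloc starts. A rough bash sketch of that final comparison (array names and the comma-join are illustrative; the script itself keeps its own sorted_t/sorted_s bookkeeping):

  # Sketch: how the verdict lines above are formed; values taken from the trace.
  nodes_sys=(512 1024)    # per-node counts the test configured
  nodes_test=(512 1024)   # per-node counts get_meminfo reported back
  for node in "${!nodes_test[@]}"; do
      echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
  done
  measured=$(IFS=,; echo "${nodes_test[*]}")   # -> "512,1024"
  expected=$(IFS=,; echo "${nodes_sys[*]}")
  [[ $measured == "$expected" ]] && echo PASS || echo FAIL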
setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}" 00:06:22.659 01:43:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024 00:06:22.659 01:43:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@72 -- # return 0 00:06:22.659 01:43:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # NRHUGE=1024 00:06:22.659 01:43:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # HUGENODE=0 00:06:22.660 01:43:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # setup output 00:06:22.660 01:43:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:22.660 01:43:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:06:26.060 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:06:26.060 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver 00:06:26.060 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:06:26.060 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:06:26.061 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:06:26.061 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:06:26.061 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:06:26.061 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:06:26.061 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:06:26.061 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:06:26.061 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:06:26.061 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:06:26.061 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:06:26.061 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:06:26.061 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:06:26.061 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:06:26.061 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:06:28.600 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@189 -- # verify_nr_hugepages 00:06:28.600 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node 00:06:28.600 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:06:28.600 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:06:28.600 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp 00:06:28.600 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv 00:06:28.600 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon 00:06:28.600 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:28.600 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:06:28.600 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:28.600 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:28.600 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:28.600 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:28.600 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:28.600 01:43:57 
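no_shrink_alloc asks for 2,097,152 kB of hugepages on node 0 only, which with the default 2048 kB hugepage size works out to the NRHUGE=1024 HUGENODE=0 exported above before scripts/setup.sh re-runs; the PCI lines show the devices already bound to vfio-pci, so the rerun mainly adjusts the hugepage pool. A small sketch of the implied arithmetic (the exact division step is an assumption; the traced values are consistent with it):

  # Sketch: the size-to-page-count arithmetic implied by the trace.
  size_kb=2097152       # argument to get_test_nr_hugepages above
  hugepagesize_kb=2048  # "Hugepagesize: 2048 kB" in the meminfo dump below
  nr_hugepages=$(( size_kb / hugepagesize_kb ))
  echo "NRHUGE=$nr_hugepages HUGENODE=0"   # -> NRHUGE=1024 HUGENODE=0, as exported above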
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:28.600 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:28.600 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:28.600 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:28.600 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.600 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.600 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74201060 kB' 'MemAvailable: 78401776 kB' 'Buffers: 9772 kB' 'Cached: 12518488 kB' 'SwapCached: 0 kB' 'Active: 8937356 kB' 'Inactive: 4107900 kB' 'Active(anon): 8515064 kB' 'Inactive(anon): 0 kB' 'Active(file): 422292 kB' 'Inactive(file): 4107900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520308 kB' 'Mapped: 203740 kB' 'Shmem: 7998068 kB' 'KReclaimable: 506444 kB' 'Slab: 1111540 kB' 'SReclaimable: 506444 kB' 'SUnreclaim: 605096 kB' 'KernelStack: 17488 kB' 'PageTables: 8492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 9836232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213224 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB' 00:06:28.600 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:28.600 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.600 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.600 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.600 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:28.600 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.600 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.600 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.600 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:28.600 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.600 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.600 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.600 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:28.600 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.600 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
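The verify pass that starts here first checks the transparent-hugepage policy ("always [madvise] never" in the trace, i.e. not fully disabled) and then pulls AnonHugePages out of the global /proc/meminfo snapshot it just printed, presumably so THP-backed anonymous memory can be accounted for separately from the reserved pool. A minimal sketch of that guard, assuming the policy string comes from the standard sysfs file:

  # Sketch: record an AnonHugePages baseline only when THP is not fully disabled.
  # The sysfs path is an assumption; the traced policy string was "always [madvise] never",
  # so this branch would be taken.
  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
  if [[ $thp != *"[never]"* ]]; then
      anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
      echo "AnonHugePages baseline: ${anon:-0} kB"
  fi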
[00:06:28.600-00:06:28.601 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32: the IFS=': ' read loop walks the remaining /proc/meminfo keys, Cached through HardwareCorrupted; each one fails the [[ ... == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] test and hits continue]
00:06:28.601 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:06:28.601 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:06:28.602 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:06:28.602 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0
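The lookup just traced is the suite's get_meminfo helper resolving a single /proc/meminfo key (here AnonHugePages, yielding anon=0); the same helper is invoked next for HugePages_Surp, HugePages_Rsvd and HugePages_Total. A minimal sketch of that pattern, reconstructed from the xtrace (illustrative, not the verbatim setup/common.sh; the per-node path test and the extglob prefix strip mirror the @23/@29 entries visible in the trace):

    #!/usr/bin/env bash
    # Sketch reconstructed from the xtrace above -- not the verbatim setup/common.sh.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _ line
        local mem_f=/proc/meminfo
        local -a mem
        # use the per-node copy only when a node is given and the file exists
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # per-node meminfo prefixes every line with "Node <n> "; strip it
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"   # kB for sizes, a bare count for HugePages_* keys
                return 0
            fi
        done
        echo 0
    }
    # e.g.: get_meminfo HugePages_Free      -> system-wide value
    #       get_meminfo HugePages_Free 0    -> value for NUMA node 0, if exposed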
00:06:28.602 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp
00:06:28.602 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:06:28.602 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:06:28.602 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:06:28.602 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:28.602 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:28.602 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:28.602 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:28.602 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:28.602 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:28.602 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:28.602 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:28.602 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74202068 kB' 'MemAvailable: 78402784 kB' 'Buffers: 9772 kB' 'Cached: 12518488 kB' 'SwapCached: 0 kB' 'Active: 8938088 kB' 'Inactive: 4107900 kB' 'Active(anon): 8515796 kB' 'Inactive(anon): 0 kB' 'Active(file): 422292 kB' 'Inactive(file): 4107900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521128 kB' 'Mapped: 203640 kB' 'Shmem: 7998068 kB' 'KReclaimable: 506444 kB' 'Slab: 1111544 kB' 'SReclaimable: 506444 kB' 'SUnreclaim: 605100 kB' 'KernelStack: 17424 kB' 'PageTables: 8320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 9837384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213208 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB'
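The snapshot just printed already carries every value the hugepage checks below depend on: HugePages_Total 1024, HugePages_Free 1024, HugePages_Rsvd 0, HugePages_Surp 0, Hugepagesize 2048 kB and Hugetlb 2097152 kB. The numbers are self-consistent, since 1024 pages of 2048 kB is exactly 2097152 kB (2 GiB) and the whole pool is still free; a quick check of that arithmetic:

    # Values taken from the meminfo dump above (this run only).
    total=1024 free=1024 rsvd=0 surp=0 size_kb=2048
    (( total * size_kb == 2097152 ))             # Hugetlb = HugePages_Total * Hugepagesize
    (( free <= total && rsvd + surp <= total ))  # free/reserved/surplus stay within the pool
    echo "$(( total * size_kb / 1024 )) MiB preallocated in $total pages, $free still free"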
[00:06:28.602-00:06:28.603 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32: the read loop walks every /proc/meminfo key from MemTotal through HugePages_Rsvd; each one fails the [[ ... == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] test and hits continue]
00:06:28.603 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:06:28.603 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:06:28.603 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:06:28.603 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0
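Because get_meminfo resolves one key per call, the field-by-field scan above repeats for every lookup: first AnonHugePages, then HugePages_Surp, and next HugePages_Rsvd and HugePages_Total. Purely as an illustration of the same data being gathered in a single pass (not something this test suite does), the four values could also be collected with one read of /proc/meminfo:

    # Illustrative one-pass alternative; the suite itself rescans per key.
    declare -A meminfo
    while IFS=': ' read -r key val _; do
        meminfo[$key]=$val
    done < /proc/meminfo
    echo "anon=${meminfo[AnonHugePages]:-0} surp=${meminfo[HugePages_Surp]:-0}" \
         "resv=${meminfo[HugePages_Rsvd]:-0} total=${meminfo[HugePages_Total]:-0}"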
00:06:28.603 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:06:28.603 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:06:28.603 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:06:28.603 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:06:28.603 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:06:28.603 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:06:28.603 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:06:28.603 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:06:28.603 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:06:28.603 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:06:28.603 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:06:28.603 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:06:28.604 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74201720 kB' 'MemAvailable: 78402436 kB' 'Buffers: 9772 kB' 'Cached: 12518488 kB' 'SwapCached: 0 kB' 'Active: 8938060 kB' 'Inactive: 4107900 kB' 'Active(anon): 8515768 kB' 'Inactive(anon): 0 kB' 'Active(file): 422292 kB' 'Inactive(file): 4107900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521060 kB' 'Mapped: 203648 kB' 'Shmem: 7998068 kB' 'KReclaimable: 506444 kB' 'Slab: 1111544 kB' 'SReclaimable: 506444 kB' 'SUnreclaim: 605100 kB' 'KernelStack: 17360 kB' 'PageTables: 8144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 9838904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213224 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB'
[00:06:28.604-00:06:28.605 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32: the read loop walks every /proc/meminfo key from MemTotal through HugePages_Free; each one fails the [[ ... == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] test and hits continue]
00:06:28.605 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:06:28.605 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:06:28.605 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:06:28.605 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0
00:06:28.605 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024
00:06:28.605 nr_hugepages=1024
01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:06:28.605 resv_hugepages=0
01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:06:28.605 surplus_hugepages=0
01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:06:28.605 anon_hugepages=0
01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv ))
00:06:28.605 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages ))
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB' 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 01:43:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.606 01:43:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.606 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064864 kB' 'MemFree: 33617608 kB' 'MemUsed: 14447256 kB' 'SwapCached: 0 kB' 'Active: 6794652 kB' 'Inactive: 3878016 kB' 'Active(anon): 6584420 kB' 'Inactive(anon): 0 kB' 'Active(file): 210232 kB' 'Inactive(file): 3878016 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10420700 kB' 'Mapped: 99880 kB' 'AnonPages: 255124 kB' 'Shmem: 6332452 kB' 'KernelStack: 10504 kB' 'PageTables: 5220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 233760 kB' 'Slab: 531668 kB' 'SReclaimable: 233760 kB' 'SUnreclaim: 297908 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 01:43:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.607 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.608 01:43:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.608 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.609 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.609 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
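This second scan runs against node 0 only: get_nodes (hugepages.sh@26-31) found two NUMA nodes, and get_meminfo is now re-reading /sys/devices/system/node/node0/meminfo for HugePages_Surp before the script prints "node0=1024 expecting 1024". A rough sketch of that per-node accounting, under the same sysfs layout the trace shows (the loop and variable names are illustrative, not lifted from hugepages.sh):

#!/usr/bin/env bash
# Report per-node HugePages_Total the way the verification step compares
# node0 against the expected 1024; per-node meminfo lines look like
# "Node 0 HugePages_Total:  1024".
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    total=$(awk '/HugePages_Total:/ {print $NF}' "$node_dir/meminfo")
    echo "node${node}=${total}"
done

The global check a few lines earlier is the simple identity (( HugePages_Total == nr_hugepages + surp + resv )), which is why HugePages_Rsvd and HugePages_Surp are fetched with the same field-scanning helper.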
00:06:28.609 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.609 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.609 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:28.609 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:28.609 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:28.609 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:28.609 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:28.609 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:28.609 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:06:28.609 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:06:28.609 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:06:28.609 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:06:28.609 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:06:28.609 node0=1024 expecting 1024 00:06:28.609 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:06:28.609 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # CLEAR_HUGE=no 00:06:28.609 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # NRHUGE=512 00:06:28.609 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # HUGENODE=0 00:06:28.609 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # setup output 00:06:28.609 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:06:28.609 01:43:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:06:31.899 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:06:31.899 0000:1a:00.0 (8086 0a54): Already using the vfio-pci driver 00:06:31.899 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:06:31.899 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:06:31.899 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:06:31.899 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:06:31.899 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:06:31.899 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:06:31.899 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:06:31.899 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:06:31.899 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:06:31.899 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:06:31.899 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:06:31.899 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:06:31.899 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:06:31.899 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:06:31.899 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:06:34.438 INFO: Requested 512 hugepages but 1024 already 
allocated on node0 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@194 -- # verify_nr_hugepages 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74224648 kB' 'MemAvailable: 78425364 kB' 'Buffers: 9772 kB' 'Cached: 12518676 kB' 'SwapCached: 0 kB' 'Active: 8935516 kB' 'Inactive: 4107900 kB' 'Active(anon): 8513224 kB' 'Inactive(anon): 0 kB' 'Active(file): 422292 kB' 'Inactive(file): 4107900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518288 kB' 'Mapped: 203900 kB' 'Shmem: 7998256 kB' 'KReclaimable: 506444 kB' 'Slab: 1112228 kB' 'SReclaimable: 506444 kB' 'SUnreclaim: 605784 kB' 'KernelStack: 17360 kB' 'PageTables: 8084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 9837060 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213272 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB' 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:34.438 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.438 01:44:03 
(setup/common.sh@31-32: the loop reads the remaining /proc/meminfo fields, Inactive(anon) through CommitLimit, and continues past each one because none of them matches AnonHugePages.)
01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:34.439 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.439 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.439 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.439 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:34.439 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.439 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.439 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.439 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:34.439 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.439 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.439 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.439 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:34.439 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.439 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.439 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.439 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:34.439 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.439 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.439 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.439 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:34.439 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.439 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.439 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.439 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:06:34.439 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:34.439 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:34.439 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 00:06:34.439 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:06:34.440 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:34.440 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:34.440 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:34.440 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:34.440 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
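The trace above is setup/common.sh's get_meminfo walking /proc/meminfo one field at a time until it reaches the requested key: AnonHugePages is 0 kB on this host, so hugepages.sh records anon=0, and the same walk immediately restarts for HugePages_Surp. A minimal bash sketch of that lookup pattern follows; it mirrors what the xtrace shows (per-node meminfo fallback, "Node <n> " prefix stripping, IFS=': ' splitting) but is an illustration, not a copy of the real helper.

#!/usr/bin/env bash
# Illustrative only: the real implementation is get_meminfo() in setup/common.sh.
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=${2:-}      # key to look up, optional NUMA node
    local mem_f=/proc/meminfo mem line var val _

    # With a node argument, prefer the per-node view when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <n> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue    # skip fields until the key matches
        echo "$val"                         # value in kB (or pages for HugePages_*)
        return 0
    done
    return 1
}

# Usage: prints 0 on this host, matching the anon=0 recorded above.
get_meminfo_sketch AnonHugePages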
00:06:34.440 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:34.440 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:34.440 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:34.440 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:34.440 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.440 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.440 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74226448 kB' 'MemAvailable: 78427164 kB' 'Buffers: 9772 kB' 'Cached: 12518680 kB' 'SwapCached: 0 kB' 'Active: 8935628 kB' 'Inactive: 4107900 kB' 'Active(anon): 8513336 kB' 'Inactive(anon): 0 kB' 'Active(file): 422292 kB' 'Inactive(file): 4107900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518400 kB' 'Mapped: 203700 kB' 'Shmem: 7998260 kB' 'KReclaimable: 506444 kB' 'Slab: 1112200 kB' 'SReclaimable: 506444 kB' 'SUnreclaim: 605756 kB' 'KernelStack: 17424 kB' 'PageTables: 8292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 9838108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213240 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB' 00:06:34.440 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:34.440 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.440 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.440 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.440 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:34.440 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.440 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.440 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.440 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:34.440 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.440 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.440 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.440 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:34.440 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.440 01:44:03 setup.sh.hugepages.no_shrink_alloc -- 
(setup/common.sh@31-32: the same field-by-field walk repeats for the HugePages_Surp lookup; Cached through CmaTotal are read and skipped because none of them matches HugePages_Surp.)
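The rest of this trace resolves HugePages_Surp, HugePages_Rsvd and HugePages_Total with the same walk and then checks them against the requested pool size, which is why nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0 are echoed further down before the (( 1024 == nr_hugepages + surp + resv )) and (( 1024 == nr_hugepages )) tests. The sketch below shows that accounting check in compact form; the helper name and exact bookkeeping are assumptions for illustration, the real logic lives in setup/hugepages.sh.

#!/usr/bin/env bash
# Illustrative accounting check for the "no shrink" hugepage allocation case.

# Hypothetical helper: print the value of one /proc/meminfo field.
meminfo() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

expected=1024                        # pool size requested by the test run
total=$(meminfo HugePages_Total)     # pages currently in the pool
surp=$(meminfo HugePages_Surp)       # surplus pages beyond the static pool
resv=$(meminfo HugePages_Rsvd)       # reserved but not yet faulted pages
anon=$(meminfo AnonHugePages)        # transparent hugepages, reported in kB

echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"

# The run above asserts both conditions: every requested page is still
# accounted for, and the pool itself was not shrunk by the allocation.
(( expected == total + surp + resv )) || { echo 'hugepage accounting mismatch'; exit 1; }
(( expected == total )) || { echo 'hugepage pool shrank'; exit 1; }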
00:06:34.441 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.441 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:34.441 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.441 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.441 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.441 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:34.441 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.441 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.441 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.441 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:34.441 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.441 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.441 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.441 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:34.441 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.441 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.441 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.441 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:34.441 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.441 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.441 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.441 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:34.441 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:34.441 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:34.441 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0 00:06:34.441 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:06:34.441 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:06:34.441 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:34.441 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:34.441 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:34.441 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:34.441 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:34.441 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:06:34.441 01:44:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:06:34.441 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:34.442 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74222704 kB' 'MemAvailable: 78423420 kB' 'Buffers: 9772 kB' 'Cached: 12518696 kB' 'SwapCached: 0 kB' 'Active: 8938312 kB' 'Inactive: 4107900 kB' 'Active(anon): 8516020 kB' 'Inactive(anon): 0 kB' 'Active(file): 422292 kB' 'Inactive(file): 4107900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521148 kB' 'Mapped: 204460 kB' 'Shmem: 7998276 kB' 'KReclaimable: 506444 kB' 'Slab: 1112200 kB' 'SReclaimable: 506444 kB' 'SUnreclaim: 605756 kB' 'KernelStack: 17424 kB' 'PageTables: 8312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 9840700 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213208 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB' 00:06:34.442 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.442 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.442 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:34.442 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.442 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.442 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.442 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:34.442 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.442 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.442 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.442 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:34.442 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.442 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.442 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.442 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:34.442 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.442 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.442 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.442 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:34.442 01:44:03 
(setup/common.sh@31-32: the walk repeats once more for the HugePages_Rsvd lookup; SwapCached through CmaTotal are read and skipped because none of them matches HugePages_Rsvd.)
00:06:34.443 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:34.443 01:44:03 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@32 -- # continue 00:06:34.443 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.443 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.443 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:34.443 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.443 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.443 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.443 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:34.443 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.443 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.443 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.443 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:34.443 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.443 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.443 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.443 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:06:34.443 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:34.443 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:34.443 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0 00:06:34.443 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:06:34.443 nr_hugepages=1024 00:06:34.443 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:06:34.443 resv_hugepages=0 00:06:34.443 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:06:34.443 surplus_hugepages=0 00:06:34.443 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:06:34.443 anon_hugepages=0 00:06:34.443 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:34.443 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:06:34.443 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:06:34.443 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:06:34.443 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:06:34.443 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:34.443 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:34.443 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:34.443 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:06:34.443 01:44:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:06:34.443 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:34.443 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:34.443 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.443 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92285444 kB' 'MemFree: 74218672 kB' 'MemAvailable: 78419388 kB' 'Buffers: 9772 kB' 'Cached: 12518736 kB' 'SwapCached: 0 kB' 'Active: 8940872 kB' 'Inactive: 4107900 kB' 'Active(anon): 8518580 kB' 'Inactive(anon): 0 kB' 'Active(file): 422292 kB' 'Inactive(file): 4107900 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523616 kB' 'Mapped: 204552 kB' 'Shmem: 7998316 kB' 'KReclaimable: 506444 kB' 'Slab: 1112200 kB' 'SReclaimable: 506444 kB' 'SUnreclaim: 605756 kB' 'KernelStack: 17408 kB' 'PageTables: 8268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53482748 kB' 'Committed_AS: 9842876 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 213212 kB' 'VmallocChunk: 0 kB' 'Percpu: 69984 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 486840 kB' 'DirectMap2M: 7577600 kB' 'DirectMap1G: 94371840 kB' 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.444 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:34.445 01:44:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.445 01:44:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:06:34.445 01:44:03 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48064864 kB' 'MemFree: 33620208 kB' 'MemUsed: 14444656 kB' 'SwapCached: 0 kB' 'Active: 6794480 kB' 'Inactive: 3878016 kB' 'Active(anon): 6584248 kB' 'Inactive(anon): 0 kB' 'Active(file): 210232 kB' 'Inactive(file): 3878016 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10420740 kB' 'Mapped: 99872 kB' 'AnonPages: 254944 kB' 'Shmem: 6332492 kB' 'KernelStack: 10328 kB' 'PageTables: 5092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 233760 kB' 'Slab: 531680 kB' 'SReclaimable: 233760 kB' 'SUnreclaim: 297920 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:34.445 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
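Editor's note: the trace above is the generic per-key meminfo lookup from setup/common.sh — pick /proc/meminfo or the per-node file, strip the "Node <N>" prefix, then scan key/value pairs with `IFS=': ' read` until the requested field matches. A minimal reconstruction of that technique (simplified sketch, not the verbatim SPDK helper):

```bash
#!/usr/bin/env bash
# Simplified reconstruction of the meminfo lookup exercised above;
# not the verbatim setup/common.sh helper.
shopt -s extglob

get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo
    # With a node argument, prefer the per-node view when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every row with "Node <N> "; strip it, as the trace does.
    mem=("${mem[@]#Node +([0-9]) }")

    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# e.g. the node-0 HugePages_Surp query in this part of the log:
get_meminfo HugePages_Surp 0
```

Every non-matching key shows up in the xtrace as a `[[ ... ]]` test followed by `continue`, which is why the log repeats the same three commands for each meminfo row.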
00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
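Editor's note: this HugePages_Surp scan feeds the per-node check that ends below with `node0=1024 expecting 1024`. A rough sketch of that verification, reusing the get_meminfo sketch above; the array names are illustrative (the trace uses nodes_sys/nodes_test), and 2 MiB pages are assumed, matching Hugepagesize in this run:

```bash
# Rough sketch of the per-node check that prints "node0=1024 expecting 1024"
# below. Array names are illustrative, not the script's own.
declare -a nodes_seen nodes_expected

# What the kernel reports per NUMA node right now.
for node in /sys/devices/system/node/node[0-9]*; do
    n=${node##*node}
    nodes_seen[$n]=$(< "$node"/hugepages/hugepages-2048kB/nr_hugepages)
done

# What the test expects: all 1024 pages on node 0, none on node 1.
nodes_expected[0]=1024
nodes_expected[1]=0

for n in "${!nodes_seen[@]}"; do
    surp=$(get_meminfo HugePages_Surp "$n")          # 0 in the trace above
    total=$(( ${nodes_seen[$n]} + surp ))
    echo "node$n=$total expecting ${nodes_expected[$n]}"
    [[ $total -eq ${nodes_expected[$n]} ]] || exit 1
done
```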
00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:34.446 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.447 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.447 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.447 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:34.447 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:06:34.447 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:06:34.447 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:06:34.447 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:06:34.447 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:06:34.447 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:06:34.447 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:06:34.447 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:06:34.447 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:06:34.447 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:06:34.447 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:06:34.447 node0=1024 expecting 1024 00:06:34.447 01:44:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:06:34.447 00:06:34.447 real 0m11.501s 00:06:34.447 user 0m4.013s 00:06:34.447 sys 0m7.459s 00:06:34.447 01:44:03 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.447 01:44:03 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:06:34.447 ************************************ 00:06:34.447 END TEST no_shrink_alloc 00:06:34.447 
************************************ 00:06:34.447 01:44:03 setup.sh.hugepages -- setup/hugepages.sh@206 -- # clear_hp 00:06:34.447 01:44:03 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp 00:06:34.447 01:44:03 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:06:34.447 01:44:03 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:34.447 01:44:03 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:06:34.447 01:44:03 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:34.447 01:44:03 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:06:34.447 01:44:03 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:06:34.447 01:44:03 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:34.447 01:44:03 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:06:34.447 01:44:03 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:06:34.447 01:44:03 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:06:34.447 01:44:03 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes 00:06:34.447 01:44:03 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes 00:06:34.447 00:06:34.447 real 0m36.586s 00:06:34.447 user 0m10.991s 00:06:34.447 sys 0m22.296s 00:06:34.447 01:44:03 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.447 01:44:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:06:34.447 ************************************ 00:06:34.447 END TEST hugepages 00:06:34.447 ************************************ 00:06:34.447 01:44:03 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:06:34.447 01:44:03 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.447 01:44:03 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.447 01:44:03 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:34.447 ************************************ 00:06:34.447 START TEST driver 00:06:34.447 ************************************ 00:06:34.447 01:44:03 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/driver.sh 00:06:34.447 * Looking for test storage... 
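Editor's note: before the driver tests start, the clear_hp pass traced above walks every node's hugepage directories and echoes 0 into each size. A minimal sketch of that cleanup; the redirect target is an assumption (xtrace does not print redirections), and writing these files requires root:

```bash
# Minimal sketch of the clear_hp cleanup traced above. The loop structure and
# the CLEAR_HUGE export follow the trace; the redirect target is assumed.
clear_hp() {
    local node hp
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"   # assumed target of the traced "echo 0"
        done
    done
    export CLEAR_HUGE=yes
}
```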
00:06:34.447 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:06:34.447 01:44:04 setup.sh.driver -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:34.447 01:44:04 setup.sh.driver -- common/autotest_common.sh@1681 -- # lcov --version 00:06:34.447 01:44:04 setup.sh.driver -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:34.447 01:44:04 setup.sh.driver -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:34.447 01:44:04 setup.sh.driver -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:34.447 01:44:04 setup.sh.driver -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:34.447 01:44:04 setup.sh.driver -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:34.447 01:44:04 setup.sh.driver -- scripts/common.sh@336 -- # IFS=.-: 00:06:34.447 01:44:04 setup.sh.driver -- scripts/common.sh@336 -- # read -ra ver1 00:06:34.447 01:44:04 setup.sh.driver -- scripts/common.sh@337 -- # IFS=.-: 00:06:34.447 01:44:04 setup.sh.driver -- scripts/common.sh@337 -- # read -ra ver2 00:06:34.447 01:44:04 setup.sh.driver -- scripts/common.sh@338 -- # local 'op=<' 00:06:34.447 01:44:04 setup.sh.driver -- scripts/common.sh@340 -- # ver1_l=2 00:06:34.447 01:44:04 setup.sh.driver -- scripts/common.sh@341 -- # ver2_l=1 00:06:34.447 01:44:04 setup.sh.driver -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:34.447 01:44:04 setup.sh.driver -- scripts/common.sh@344 -- # case "$op" in 00:06:34.447 01:44:04 setup.sh.driver -- scripts/common.sh@345 -- # : 1 00:06:34.447 01:44:04 setup.sh.driver -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:34.447 01:44:04 setup.sh.driver -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:34.447 01:44:04 setup.sh.driver -- scripts/common.sh@365 -- # decimal 1 00:06:34.447 01:44:04 setup.sh.driver -- scripts/common.sh@353 -- # local d=1 00:06:34.447 01:44:04 setup.sh.driver -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:34.447 01:44:04 setup.sh.driver -- scripts/common.sh@355 -- # echo 1 00:06:34.447 01:44:04 setup.sh.driver -- scripts/common.sh@365 -- # ver1[v]=1 00:06:34.447 01:44:04 setup.sh.driver -- scripts/common.sh@366 -- # decimal 2 00:06:34.447 01:44:04 setup.sh.driver -- scripts/common.sh@353 -- # local d=2 00:06:34.447 01:44:04 setup.sh.driver -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:34.447 01:44:04 setup.sh.driver -- scripts/common.sh@355 -- # echo 2 00:06:34.447 01:44:04 setup.sh.driver -- scripts/common.sh@366 -- # ver2[v]=2 00:06:34.447 01:44:04 setup.sh.driver -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:34.447 01:44:04 setup.sh.driver -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:34.447 01:44:04 setup.sh.driver -- scripts/common.sh@368 -- # return 0 00:06:34.447 01:44:04 setup.sh.driver -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:34.447 01:44:04 setup.sh.driver -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:34.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.447 --rc genhtml_branch_coverage=1 00:06:34.447 --rc genhtml_function_coverage=1 00:06:34.447 --rc genhtml_legend=1 00:06:34.447 --rc geninfo_all_blocks=1 00:06:34.447 --rc geninfo_unexecuted_blocks=1 00:06:34.447 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:34.447 ' 00:06:34.447 01:44:04 setup.sh.driver -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:34.447 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:34.447 --rc genhtml_branch_coverage=1 00:06:34.447 --rc genhtml_function_coverage=1 00:06:34.447 --rc genhtml_legend=1 00:06:34.447 --rc geninfo_all_blocks=1 00:06:34.447 --rc geninfo_unexecuted_blocks=1 00:06:34.447 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:34.447 ' 00:06:34.447 01:44:04 setup.sh.driver -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:34.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.447 --rc genhtml_branch_coverage=1 00:06:34.447 --rc genhtml_function_coverage=1 00:06:34.447 --rc genhtml_legend=1 00:06:34.447 --rc geninfo_all_blocks=1 00:06:34.447 --rc geninfo_unexecuted_blocks=1 00:06:34.447 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:34.447 ' 00:06:34.447 01:44:04 setup.sh.driver -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:34.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.447 --rc genhtml_branch_coverage=1 00:06:34.447 --rc genhtml_function_coverage=1 00:06:34.447 --rc genhtml_legend=1 00:06:34.447 --rc geninfo_all_blocks=1 00:06:34.447 --rc geninfo_unexecuted_blocks=1 00:06:34.447 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:34.447 ' 00:06:34.447 01:44:04 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:06:34.447 01:44:04 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:34.447 01:44:04 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:06:42.572 01:44:11 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:06:42.572 01:44:11 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:42.572 01:44:11 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.572 01:44:11 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:06:42.572 ************************************ 00:06:42.572 START TEST guess_driver 00:06:42.572 ************************************ 00:06:42.572 01:44:11 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:06:42.572 01:44:11 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:06:42.572 01:44:11 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:06:42.572 01:44:11 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:06:42.572 01:44:11 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:06:42.572 01:44:11 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:06:42.572 01:44:11 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:06:42.572 01:44:11 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:06:42.572 01:44:11 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:06:42.572 01:44:11 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:06:42.572 01:44:11 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 190 > 0 )) 00:06:42.572 01:44:11 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:06:42.572 01:44:11 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:06:42.572 01:44:11 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:06:42.572 
01:44:11 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:06:42.572 01:44:11 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:06:42.572 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:06:42.572 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:06:42.572 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:06:42.572 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:06:42.572 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:06:42.572 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:06:42.572 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:06:42.573 01:44:11 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:06:42.573 01:44:11 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:06:42.573 01:44:11 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:06:42.573 01:44:11 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:06:42.573 01:44:11 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:06:42.573 Looking for driver=vfio-pci 00:06:42.573 01:44:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:42.573 01:44:11 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:06:42.573 01:44:11 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:06:42.573 01:44:11 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:06:45.110 01:44:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:45.110 01:44:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:45.110 01:44:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:45.110 01:44:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:45.110 01:44:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:45.110 01:44:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:45.110 01:44:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:45.110 01:44:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:45.110 01:44:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:45.110 01:44:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:45.110 01:44:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:45.110 01:44:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:45.110 01:44:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:45.110 01:44:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:45.110 01:44:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:45.110 01:44:14 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:45.110 01:44:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:45.110 01:44:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:45.110 01:44:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:45.110 01:44:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:45.110 01:44:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:45.110 01:44:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:45.110 01:44:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:45.110 01:44:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:45.110 01:44:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:45.110 01:44:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:45.110 01:44:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:45.110 01:44:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:45.110 01:44:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:45.110 01:44:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:45.110 01:44:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:45.110 01:44:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:45.110 01:44:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:45.110 01:44:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:45.110 01:44:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:45.110 01:44:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:45.110 01:44:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:45.110 01:44:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:45.110 01:44:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:45.110 01:44:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:45.110 01:44:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:45.110 01:44:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:45.110 01:44:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:45.110 01:44:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:45.110 01:44:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:45.110 01:44:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:45.110 01:44:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:45.110 01:44:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:48.406 01:44:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:06:48.406 01:44:17 
setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:06:48.406 01:44:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:06:50.310 01:44:19 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:06:50.310 01:44:19 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:06:50.310 01:44:19 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:50.310 01:44:19 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:06:58.433 00:06:58.433 real 0m15.590s 00:06:58.433 user 0m3.780s 00:06:58.433 sys 0m7.736s 00:06:58.433 01:44:26 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.433 01:44:26 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:06:58.433 ************************************ 00:06:58.433 END TEST guess_driver 00:06:58.433 ************************************ 00:06:58.433 00:06:58.433 real 0m22.932s 00:06:58.433 user 0m6.056s 00:06:58.433 sys 0m12.079s 00:06:58.433 01:44:26 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.433 01:44:26 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:06:58.433 ************************************ 00:06:58.433 END TEST driver 00:06:58.433 ************************************ 00:06:58.433 01:44:26 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:06:58.433 01:44:26 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:58.433 01:44:26 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.433 01:44:26 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:06:58.433 ************************************ 00:06:58.433 START TEST devices 00:06:58.433 ************************************ 00:06:58.433 01:44:26 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/devices.sh 00:06:58.433 * Looking for test storage... 
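Editor's note: the guess_driver run above settles on vfio-pci because the host exposes IOMMU groups (190 of them) and `modprobe --show-depends vfio_pci` resolves to real .ko modules. A condensed sketch of that decision; the fallback driver name is an assumption, since this run never takes that branch:

```bash
# Condensed sketch of the driver choice traced above. The vfio branch follows
# the trace; the fallback driver name is an assumption.
pick_driver() {
    shopt -s nullglob
    local groups=(/sys/kernel/iommu_groups/*)
    # vfio-pci is only useful when IOMMU groups exist (190 here) and the
    # module's dependency chain resolves to real .ko objects.
    if (( ${#groups[@]} > 0 )) \
       && modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
        echo vfio-pci
    else
        echo uio_pci_generic   # assumed fallback, not exercised in this log
    fi
}

driver=$(pick_driver)
echo "Looking for driver=$driver"
```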
00:06:58.433 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup 00:06:58.433 01:44:27 setup.sh.devices -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:58.433 01:44:27 setup.sh.devices -- common/autotest_common.sh@1681 -- # lcov --version 00:06:58.433 01:44:27 setup.sh.devices -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:58.433 01:44:27 setup.sh.devices -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:58.433 01:44:27 setup.sh.devices -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:58.433 01:44:27 setup.sh.devices -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:58.433 01:44:27 setup.sh.devices -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:58.433 01:44:27 setup.sh.devices -- scripts/common.sh@336 -- # IFS=.-: 00:06:58.433 01:44:27 setup.sh.devices -- scripts/common.sh@336 -- # read -ra ver1 00:06:58.433 01:44:27 setup.sh.devices -- scripts/common.sh@337 -- # IFS=.-: 00:06:58.433 01:44:27 setup.sh.devices -- scripts/common.sh@337 -- # read -ra ver2 00:06:58.433 01:44:27 setup.sh.devices -- scripts/common.sh@338 -- # local 'op=<' 00:06:58.433 01:44:27 setup.sh.devices -- scripts/common.sh@340 -- # ver1_l=2 00:06:58.433 01:44:27 setup.sh.devices -- scripts/common.sh@341 -- # ver2_l=1 00:06:58.433 01:44:27 setup.sh.devices -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:58.433 01:44:27 setup.sh.devices -- scripts/common.sh@344 -- # case "$op" in 00:06:58.433 01:44:27 setup.sh.devices -- scripts/common.sh@345 -- # : 1 00:06:58.433 01:44:27 setup.sh.devices -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:58.433 01:44:27 setup.sh.devices -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:58.433 01:44:27 setup.sh.devices -- scripts/common.sh@365 -- # decimal 1 00:06:58.433 01:44:27 setup.sh.devices -- scripts/common.sh@353 -- # local d=1 00:06:58.433 01:44:27 setup.sh.devices -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:58.433 01:44:27 setup.sh.devices -- scripts/common.sh@355 -- # echo 1 00:06:58.433 01:44:27 setup.sh.devices -- scripts/common.sh@365 -- # ver1[v]=1 00:06:58.433 01:44:27 setup.sh.devices -- scripts/common.sh@366 -- # decimal 2 00:06:58.433 01:44:27 setup.sh.devices -- scripts/common.sh@353 -- # local d=2 00:06:58.433 01:44:27 setup.sh.devices -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:58.433 01:44:27 setup.sh.devices -- scripts/common.sh@355 -- # echo 2 00:06:58.433 01:44:27 setup.sh.devices -- scripts/common.sh@366 -- # ver2[v]=2 00:06:58.433 01:44:27 setup.sh.devices -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:58.433 01:44:27 setup.sh.devices -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:58.433 01:44:27 setup.sh.devices -- scripts/common.sh@368 -- # return 0 00:06:58.433 01:44:27 setup.sh.devices -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:58.433 01:44:27 setup.sh.devices -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:58.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.433 --rc genhtml_branch_coverage=1 00:06:58.433 --rc genhtml_function_coverage=1 00:06:58.433 --rc genhtml_legend=1 00:06:58.433 --rc geninfo_all_blocks=1 00:06:58.433 --rc geninfo_unexecuted_blocks=1 00:06:58.433 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:58.433 ' 00:06:58.433 01:44:27 setup.sh.devices -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 
00:06:58.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.433 --rc genhtml_branch_coverage=1 00:06:58.433 --rc genhtml_function_coverage=1 00:06:58.433 --rc genhtml_legend=1 00:06:58.434 --rc geninfo_all_blocks=1 00:06:58.434 --rc geninfo_unexecuted_blocks=1 00:06:58.434 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:58.434 ' 00:06:58.434 01:44:27 setup.sh.devices -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:58.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.434 --rc genhtml_branch_coverage=1 00:06:58.434 --rc genhtml_function_coverage=1 00:06:58.434 --rc genhtml_legend=1 00:06:58.434 --rc geninfo_all_blocks=1 00:06:58.434 --rc geninfo_unexecuted_blocks=1 00:06:58.434 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:58.434 ' 00:06:58.434 01:44:27 setup.sh.devices -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:58.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.434 --rc genhtml_branch_coverage=1 00:06:58.434 --rc genhtml_function_coverage=1 00:06:58.434 --rc genhtml_legend=1 00:06:58.434 --rc geninfo_all_blocks=1 00:06:58.434 --rc geninfo_unexecuted_blocks=1 00:06:58.434 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:06:58.434 ' 00:06:58.434 01:44:27 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:06:58.434 01:44:27 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:06:58.434 01:44:27 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:06:58.434 01:44:27 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:07:03.710 01:44:32 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:07:03.710 01:44:32 setup.sh.devices -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:07:03.710 01:44:32 setup.sh.devices -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:07:03.710 01:44:32 setup.sh.devices -- common/autotest_common.sh@1656 -- # local nvme bdf 00:07:03.710 01:44:32 setup.sh.devices -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:03.710 01:44:33 setup.sh.devices -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:07:03.710 01:44:33 setup.sh.devices -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:07:03.710 01:44:33 setup.sh.devices -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:03.710 01:44:33 setup.sh.devices -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:03.710 01:44:33 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:07:03.710 01:44:33 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:07:03.710 01:44:33 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:07:03.710 01:44:33 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:07:03.710 01:44:33 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:07:03.710 01:44:33 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:07:03.710 01:44:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:07:03.710 01:44:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:07:03.710 01:44:33 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:1a:00.0 00:07:03.710 01:44:33 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == 
*\0\0\0\0\:\1\a\:\0\0\.\0* ]] 00:07:03.710 01:44:33 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:07:03.710 01:44:33 setup.sh.devices -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:07:03.710 01:44:33 setup.sh.devices -- scripts/common.sh@390 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:07:03.710 No valid GPT data, bailing 00:07:03.710 01:44:33 setup.sh.devices -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:03.710 01:44:33 setup.sh.devices -- scripts/common.sh@394 -- # pt= 00:07:03.710 01:44:33 setup.sh.devices -- scripts/common.sh@395 -- # return 1 00:07:03.710 01:44:33 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:07:03.710 01:44:33 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:03.710 01:44:33 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:03.710 01:44:33 setup.sh.devices -- setup/common.sh@80 -- # echo 4000787030016 00:07:03.710 01:44:33 setup.sh.devices -- setup/devices.sh@204 -- # (( 4000787030016 >= min_disk_size )) 00:07:03.710 01:44:33 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:07:03.710 01:44:33 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:1a:00.0 00:07:03.710 01:44:33 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:07:03.710 01:44:33 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:07:03.710 01:44:33 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:07:03.710 01:44:33 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:03.710 01:44:33 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.710 01:44:33 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:07:03.710 ************************************ 00:07:03.710 START TEST nvme_mount 00:07:03.710 ************************************ 00:07:03.710 01:44:33 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:07:03.710 01:44:33 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:07:03.710 01:44:33 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:07:03.710 01:44:33 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:07:03.710 01:44:33 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:03.710 01:44:33 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:07:03.710 01:44:33 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:07:03.710 01:44:33 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:07:03.710 01:44:33 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:07:03.710 01:44:33 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:07:03.710 01:44:33 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:07:03.710 01:44:33 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:07:03.710 01:44:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:07:03.710 01:44:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:03.710 01:44:33 
setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:07:03.710 01:44:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:07:03.710 01:44:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:03.710 01:44:33 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:07:03.710 01:44:33 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:07:03.710 01:44:33 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:07:04.648 Creating new GPT entries in memory. 00:07:04.648 GPT data structures destroyed! You may now partition the disk using fdisk or 00:07:04.648 other utilities. 00:07:04.648 01:44:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:07:04.648 01:44:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:04.648 01:44:34 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:07:04.648 01:44:34 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:07:04.648 01:44:34 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:07:05.585 Creating new GPT entries in memory. 00:07:05.585 The operation has completed successfully. 00:07:05.585 01:44:35 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:07:05.585 01:44:35 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:05.586 01:44:35 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 4011706 00:07:05.586 01:44:35 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:07:05.586 01:44:35 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size= 00:07:05.586 01:44:35 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:07:05.586 01:44:35 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:07:05.586 01:44:35 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:07:05.845 01:44:35 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:07:05.845 01:44:35 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:1a:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:05.846 01:44:35 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:07:05.846 01:44:35 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:07:05.846 01:44:35 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:07:05.846 01:44:35 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:05.846 01:44:35 
setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:07:05.846 01:44:35 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:05.846 01:44:35 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:07:05.846 01:44:35 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:07:05.846 01:44:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:05.846 01:44:35 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:07:05.846 01:44:35 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:07:05.846 01:44:35 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:05.846 01:44:35 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:07:09.136 01:44:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:09.136 01:44:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:07:09.136 01:44:38 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:07:09.136 01:44:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:09.136 01:44:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:09.136 01:44:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:09.136 01:44:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:09.136 01:44:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:09.136 01:44:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:09.136 01:44:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:09.136 01:44:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:09.136 01:44:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:09.136 01:44:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:09.136 01:44:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:09.136 01:44:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:09.136 01:44:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:09.136 01:44:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:09.136 01:44:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:09.136 01:44:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:09.136 01:44:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:09.136 01:44:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:09.136 01:44:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:09.136 
01:44:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:09.136 01:44:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:09.136 01:44:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:09.136 01:44:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:09.136 01:44:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:09.136 01:44:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:09.136 01:44:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:09.136 01:44:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:09.136 01:44:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:09.136 01:44:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:09.136 01:44:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:09.136 01:44:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:09.136 01:44:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:09.136 01:44:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:11.039 01:44:40 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:11.039 01:44:40 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:07:11.039 01:44:40 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:07:11.039 01:44:40 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:11.039 01:44:40 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:11.039 01:44:40 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:07:11.039 01:44:40 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:07:11.039 01:44:40 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:07:11.039 01:44:40 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:11.039 01:44:40 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:07:11.039 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:07:11.039 01:44:40 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:07:11.039 01:44:40 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:07:11.299 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:07:11.299 /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54 00:07:11.299 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:11.299 /dev/nvme0n1: 
calling ioctl to re-read partition table: Success 00:07:11.299 01:44:40 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:07:11.299 01:44:40 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:07:11.299 01:44:40 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:07:11.299 01:44:40 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:07:11.299 01:44:40 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:07:11.299 01:44:40 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:07:11.557 01:44:40 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:1a:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:11.557 01:44:40 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:07:11.557 01:44:40 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:07:11.557 01:44:40 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:07:11.557 01:44:40 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:11.558 01:44:40 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:07:11.558 01:44:40 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:11.558 01:44:40 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:07:11.558 01:44:41 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:07:11.558 01:44:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:11.558 01:44:41 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:07:11.558 01:44:41 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:07:11.558 01:44:41 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:11.558 01:44:41 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:07:14.848 01:44:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:14.848 01:44:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:07:14.848 01:44:44 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:07:14.848 01:44:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:14.848 01:44:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:14.848 01:44:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:14.848 01:44:44 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:14.848 01:44:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:14.848 01:44:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:14.848 01:44:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:14.848 01:44:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:14.848 01:44:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:14.848 01:44:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:14.848 01:44:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:14.848 01:44:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:14.848 01:44:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:14.848 01:44:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:14.848 01:44:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:14.848 01:44:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:14.848 01:44:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:14.848 01:44:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:14.848 01:44:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:14.848 01:44:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:14.848 01:44:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:14.848 01:44:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:14.848 01:44:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:14.848 01:44:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:14.848 01:44:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:14.848 01:44:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:14.848 01:44:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:14.848 01:44:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:14.849 01:44:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:14.849 01:44:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:14.849 01:44:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:14.849 01:44:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:14.849 01:44:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:16.782 01:44:46 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:16.782 01:44:46 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount ]] 00:07:16.782 01:44:46 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:07:16.782 01:44:46 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:07:16.782 01:44:46 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:07:16.782 01:44:46 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:07:16.782 01:44:46 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:1a:00.0 data@nvme0n1 '' '' 00:07:16.782 01:44:46 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:07:16.782 01:44:46 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:07:16.782 01:44:46 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:07:16.782 01:44:46 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:07:16.782 01:44:46 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:07:16.782 01:44:46 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:07:16.782 01:44:46 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:07:16.782 01:44:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:16.782 01:44:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:07:16.782 01:44:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:07:16.782 01:44:46 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:16.782 01:44:46 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:07:20.075 01:44:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:20.075 01:44:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:07:20.075 01:44:49 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:07:20.075 01:44:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:20.075 01:44:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:20.075 01:44:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:20.075 01:44:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:20.075 01:44:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:20.075 01:44:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:20.075 01:44:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:20.075 01:44:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:20.075 01:44:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:20.075 01:44:49 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:20.075 01:44:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:20.075 01:44:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:20.075 01:44:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:20.075 01:44:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:20.075 01:44:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:20.075 01:44:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:20.075 01:44:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:20.075 01:44:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:20.075 01:44:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:20.075 01:44:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:20.075 01:44:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:20.075 01:44:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:20.075 01:44:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:20.075 01:44:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:20.075 01:44:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:20.075 01:44:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:20.075 01:44:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:20.075 01:44:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:20.075 01:44:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:20.075 01:44:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:20.075 01:44:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:20.075 01:44:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:20.075 01:44:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:21.981 01:44:51 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:21.981 01:44:51 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:07:21.981 01:44:51 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:07:21.981 01:44:51 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:07:21.981 01:44:51 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:07:21.981 01:44:51 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:21.981 01:44:51 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:07:21.981 01:44:51 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:07:21.981 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:07:21.981 00:07:21.981 real 0m18.508s 00:07:21.981 user 0m5.227s 00:07:21.981 sys 0m10.795s 00:07:21.981 01:44:51 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.981 01:44:51 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:07:21.981 ************************************ 00:07:21.981 END TEST nvme_mount 00:07:21.981 ************************************ 00:07:22.240 01:44:51 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:07:22.240 01:44:51 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:22.240 01:44:51 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:22.240 01:44:51 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:07:22.240 ************************************ 00:07:22.240 START TEST dm_mount 00:07:22.240 ************************************ 00:07:22.240 01:44:51 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:07:22.240 01:44:51 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:07:22.240 01:44:51 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:07:22.240 01:44:51 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:07:22.240 01:44:51 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:07:22.240 01:44:51 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:07:22.240 01:44:51 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:07:22.240 01:44:51 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:07:22.240 01:44:51 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:07:22.240 01:44:51 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:07:22.240 01:44:51 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:07:22.240 01:44:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:07:22.240 01:44:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:22.240 01:44:51 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:07:22.240 01:44:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:07:22.240 01:44:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:22.240 01:44:51 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:07:22.240 01:44:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:07:22.240 01:44:51 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:07:22.240 01:44:51 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:07:22.241 01:44:51 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:07:22.241 01:44:51 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:07:23.180 Creating new GPT entries in memory. 00:07:23.180 GPT data structures destroyed! You may now partition the disk using fdisk or 00:07:23.180 other utilities. 
00:07:23.180 01:44:52 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:07:23.180 01:44:52 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:23.180 01:44:52 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:07:23.180 01:44:52 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:07:23.180 01:44:52 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:07:24.117 Creating new GPT entries in memory. 00:07:24.117 The operation has completed successfully. 00:07:24.117 01:44:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:07:24.117 01:44:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:24.117 01:44:53 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:07:24.118 01:44:53 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:07:24.118 01:44:53 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:07:25.497 The operation has completed successfully. 00:07:25.497 01:44:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:07:25.497 01:44:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:07:25.497 01:44:54 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 4016598 00:07:25.497 01:44:54 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:07:25.497 01:44:54 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:07:25.497 01:44:54 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:07:25.497 01:44:54 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:07:25.497 01:44:54 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:07:25.497 01:44:54 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:07:25.497 01:44:54 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:07:25.497 01:44:54 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:07:25.497 01:44:54 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:07:25.497 01:44:54 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:07:25.497 01:44:54 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:07:25.497 01:44:54 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:07:25.497 01:44:54 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:07:25.497 01:44:54 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:07:25.497 01:44:54 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount size= 00:07:25.497 01:44:54 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:07:25.497 01:44:54 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:07:25.497 01:44:54 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:07:25.497 01:44:54 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:07:25.497 01:44:54 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:1a:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:07:25.497 01:44:54 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:07:25.497 01:44:54 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:07:25.497 01:44:54 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:07:25.497 01:44:54 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:07:25.497 01:44:54 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:07:25.497 01:44:54 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:07:25.497 01:44:54 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:07:25.497 01:44:54 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:07:25.497 01:44:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:25.497 01:44:54 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:07:25.497 01:44:54 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:07:25.497 01:44:54 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:25.497 01:44:54 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:07:28.789 01:44:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:28.789 01:44:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:07:28.789 01:44:58 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:07:28.789 01:44:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:28.789 01:44:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:28.789 01:44:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:28.789 01:44:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:28.789 01:44:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:28.789 01:44:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:28.789 01:44:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:28.789 01:44:58 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:28.789 01:44:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:28.789 01:44:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:28.789 01:44:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:28.789 01:44:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:28.789 01:44:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:28.789 01:44:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:28.789 01:44:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:28.789 01:44:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:28.789 01:44:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:28.789 01:44:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:28.789 01:44:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:28.789 01:44:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:28.789 01:44:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:28.789 01:44:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:28.789 01:44:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:28.789 01:44:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:28.789 01:44:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:28.789 01:44:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:28.789 01:44:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:28.789 01:44:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:28.789 01:44:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:28.789 01:44:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:28.789 01:44:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:28.789 01:44:58 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:28.789 01:44:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:31.421 01:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:31.421 01:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount ]] 00:07:31.421 01:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:07:31.421 01:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:07:31.421 01:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:07:31.421 01:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:07:31.421 01:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:1a:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:07:31.421 01:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:1a:00.0 00:07:31.421 01:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:07:31.421 01:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:07:31.421 01:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:07:31.421 01:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:07:31.421 01:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:07:31.421 01:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:07:31.421 01:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:31.421 01:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:1a:00.0 00:07:31.421 01:45:00 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:07:31.421 01:45:00 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:07:31.421 01:45:00 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh config 00:07:33.969 01:45:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:1a:00.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:33.969 01:45:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:07:33.969 01:45:03 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:07:33.969 01:45:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:33.969 01:45:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:33.969 01:45:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:33.969 01:45:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:33.969 01:45:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:33.969 01:45:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:33.969 01:45:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:33.969 01:45:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:33.969 01:45:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:33.969 01:45:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:33.969 01:45:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:33.969 01:45:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:33.969 01:45:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 
00:07:33.969 01:45:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:33.969 01:45:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:33.969 01:45:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:33.969 01:45:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:33.969 01:45:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:33.969 01:45:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:33.969 01:45:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:33.969 01:45:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:33.969 01:45:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:33.969 01:45:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:33.969 01:45:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:33.969 01:45:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:33.969 01:45:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:33.969 01:45:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:33.969 01:45:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:33.969 01:45:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:33.969 01:45:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:33.969 01:45:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:33.969 01:45:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\1\a\:\0\0\.\0 ]] 00:07:33.969 01:45:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:07:36.502 01:45:05 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:07:36.502 01:45:05 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:07:36.503 01:45:05 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:07:36.503 01:45:05 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:07:36.503 01:45:05 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:07:36.503 01:45:05 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:07:36.503 01:45:05 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:07:36.503 01:45:05 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:36.503 01:45:05 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:07:36.503 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:07:36.503 01:45:05 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:07:36.503 01:45:05 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:07:36.503 00:07:36.503 real 0m14.067s 00:07:36.503 user 0m3.663s 00:07:36.503 sys 0m7.367s 
00:07:36.503 01:45:05 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:36.503 01:45:05 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:07:36.503 ************************************ 00:07:36.503 END TEST dm_mount 00:07:36.503 ************************************ 00:07:36.503 01:45:05 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:07:36.503 01:45:05 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:07:36.503 01:45:05 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/nvme_mount 00:07:36.503 01:45:05 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:36.503 01:45:05 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:07:36.503 01:45:05 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:07:36.503 01:45:05 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:07:36.503 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:07:36.503 /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54 00:07:36.503 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:36.503 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:36.503 01:45:06 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:07:36.503 01:45:06 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/dm_mount 00:07:36.503 01:45:06 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:07:36.503 01:45:06 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:07:36.503 01:45:06 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:07:36.503 01:45:06 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:07:36.503 01:45:06 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:07:36.503 00:07:36.503 real 0m39.198s 00:07:36.503 user 0m10.963s 00:07:36.503 sys 0m22.477s 00:07:36.503 01:45:06 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:36.503 01:45:06 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:07:36.503 ************************************ 00:07:36.503 END TEST devices 00:07:36.503 ************************************ 00:07:36.503 00:07:36.503 real 2m16.207s 00:07:36.503 user 0m39.452s 00:07:36.503 sys 1m19.116s 00:07:36.503 01:45:06 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:36.503 01:45:06 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:07:36.503 ************************************ 00:07:36.503 END TEST setup.sh 00:07:36.503 ************************************ 00:07:36.761 01:45:06 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh status 00:07:40.048 Hugepages 00:07:40.048 node hugesize free / total 00:07:40.048 node0 1048576kB 0 / 0 00:07:40.048 node0 2048kB 1024 / 1024 00:07:40.048 node1 1048576kB 0 / 0 00:07:40.048 node1 2048kB 1024 / 1024 00:07:40.048 00:07:40.048 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:40.048 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:07:40.048 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:07:40.048 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:07:40.048 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:07:40.048 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 
00:07:40.048 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:07:40.048 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:07:40.048 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:07:40.048 NVMe 0000:1a:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:07:40.048 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:07:40.048 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:07:40.048 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:07:40.048 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:07:40.048 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:07:40.048 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:07:40.048 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:07:40.048 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:07:40.048 01:45:09 -- spdk/autotest.sh@117 -- # uname -s 00:07:40.048 01:45:09 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:07:40.048 01:45:09 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:07:40.048 01:45:09 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:07:43.337 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:07:43.596 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:07:43.596 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:07:43.596 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:07:43.596 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:07:43.596 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:07:43.596 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:07:43.596 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:07:43.596 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:07:43.596 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:07:43.596 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:07:43.596 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:07:43.596 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:07:43.596 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:07:43.597 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:07:43.597 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:07:46.890 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci 00:07:49.424 01:45:18 -- common/autotest_common.sh@1515 -- # sleep 1 00:07:49.991 01:45:19 -- common/autotest_common.sh@1516 -- # bdfs=() 00:07:49.991 01:45:19 -- common/autotest_common.sh@1516 -- # local bdfs 00:07:49.991 01:45:19 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:07:49.991 01:45:19 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:07:49.991 01:45:19 -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:49.991 01:45:19 -- common/autotest_common.sh@1496 -- # local bdfs 00:07:49.991 01:45:19 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:49.991 01:45:19 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:07:49.991 01:45:19 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:07:50.250 01:45:19 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:07:50.250 01:45:19 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:1a:00.0 00:07:50.250 01:45:19 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh reset 00:07:53.542 Waiting for block devices as requested 00:07:53.542 0000:1a:00.0 (8086 0a54): vfio-pci -> nvme 00:07:53.542 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:07:53.542 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:07:53.542 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:07:53.542 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:07:53.801 0000:00:04.3 
(8086 2021): vfio-pci -> ioatdma 00:07:53.801 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:07:53.801 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:07:54.061 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:07:54.061 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:07:54.061 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:07:54.320 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:07:54.320 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:07:54.320 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:07:54.579 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:07:54.579 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:07:54.579 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:07:57.113 01:45:26 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:07:57.113 01:45:26 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:1a:00.0 00:07:57.113 01:45:26 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:07:57.113 01:45:26 -- common/autotest_common.sh@1485 -- # grep 0000:1a:00.0/nvme/nvme 00:07:57.113 01:45:26 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/0000:19:00.0/0000:1a:00.0/nvme/nvme0 00:07:57.113 01:45:26 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/0000:19:00.0/0000:1a:00.0/nvme/nvme0 ]] 00:07:57.113 01:45:26 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:17/0000:17:00.0/0000:18:00.0/0000:19:00.0/0000:1a:00.0/nvme/nvme0 00:07:57.113 01:45:26 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:07:57.113 01:45:26 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:07:57.113 01:45:26 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:07:57.113 01:45:26 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:07:57.113 01:45:26 -- common/autotest_common.sh@1529 -- # grep oacs 00:07:57.113 01:45:26 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:07:57.113 01:45:26 -- common/autotest_common.sh@1529 -- # oacs=' 0xe' 00:07:57.113 01:45:26 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:07:57.113 01:45:26 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:07:57.113 01:45:26 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:07:57.113 01:45:26 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:07:57.113 01:45:26 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:07:57.113 01:45:26 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:07:57.113 01:45:26 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:07:57.113 01:45:26 -- common/autotest_common.sh@1541 -- # continue 00:07:57.113 01:45:26 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:07:57.113 01:45:26 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:57.113 01:45:26 -- common/autotest_common.sh@10 -- # set +x 00:07:57.113 01:45:26 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:07:57.113 01:45:26 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:57.113 01:45:26 -- common/autotest_common.sh@10 -- # set +x 00:07:57.113 01:45:26 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/setup.sh 00:08:00.403 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:08:00.403 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:08:00.403 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:08:00.403 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:08:00.403 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:08:00.403 0000:00:04.2 (8086 
2021): ioatdma -> vfio-pci 00:08:00.403 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:08:00.403 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:08:00.403 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:08:00.403 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:08:00.403 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:08:00.403 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:08:00.403 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:08:00.403 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:08:00.403 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:08:00.403 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:08:03.694 0000:1a:00.0 (8086 0a54): nvme -> vfio-pci 00:08:05.596 01:45:34 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:08:05.596 01:45:34 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:05.596 01:45:34 -- common/autotest_common.sh@10 -- # set +x 00:08:05.596 01:45:35 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:08:05.596 01:45:35 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:08:05.596 01:45:35 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:08:05.596 01:45:35 -- common/autotest_common.sh@1561 -- # bdfs=() 00:08:05.596 01:45:35 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:08:05.596 01:45:35 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:08:05.596 01:45:35 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:08:05.596 01:45:35 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:08:05.596 01:45:35 -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:05.596 01:45:35 -- common/autotest_common.sh@1496 -- # local bdfs 00:08:05.596 01:45:35 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:05.596 01:45:35 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/gen_nvme.sh 00:08:05.596 01:45:35 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:05.596 01:45:35 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:08:05.596 01:45:35 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:1a:00.0 00:08:05.596 01:45:35 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:08:05.596 01:45:35 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:1a:00.0/device 00:08:05.596 01:45:35 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:08:05.596 01:45:35 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:08:05.596 01:45:35 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:08:05.596 01:45:35 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:08:05.596 01:45:35 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:1a:00.0 00:08:05.596 01:45:35 -- common/autotest_common.sh@1577 -- # [[ -z 0000:1a:00.0 ]] 00:08:05.596 01:45:35 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=4027222 00:08:05.596 01:45:35 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:08:05.596 01:45:35 -- common/autotest_common.sh@1583 -- # waitforlisten 4027222 00:08:05.596 01:45:35 -- common/autotest_common.sh@831 -- # '[' -z 4027222 ']' 00:08:05.596 01:45:35 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.596 01:45:35 -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:05.596 01:45:35 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
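Before spdk_tgt comes up, the xtrace above shows how opal_revert_cleanup found its target: get_nvme_bdfs asks gen_nvme.sh for a bdev config and pulls each traddr out with jq, then the PCI device ID is read back from sysfs and only 0x0a54 controllers are kept. A minimal stand-alone sketch of that lookup (the SPDK_DIR default below is just this job's workspace path, and the 0x0a54 filter is the SSD seen in this run, not a general requirement):

    #!/usr/bin/env bash
    # Sketch: enumerate local NVMe BDFs the way autotest_common.sh does,
    # then keep only controllers whose PCI device ID matches 0x0a54.
    SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/short-fuzz-phy-autotest/spdk}

    # gen_nvme.sh emits a JSON bdev config; traddr holds each controller's BDF.
    bdfs=($("$SPDK_DIR/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))

    for bdf in "${bdfs[@]}"; do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")   # e.g. 0x0a54
        [[ $device == 0x0a54 ]] && echo "opal revert candidate: $bdf"
    done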
00:08:05.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.596 01:45:35 -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:05.596 01:45:35 -- common/autotest_common.sh@10 -- # set +x 00:08:05.596 [2024-10-09 01:45:35.152774] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 00:08:05.596 [2024-10-09 01:45:35.152855] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4027222 ] 00:08:05.596 [2024-10-09 01:45:35.229905] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.855 [2024-10-09 01:45:35.283707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.855 01:45:35 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:05.855 01:45:35 -- common/autotest_common.sh@864 -- # return 0 00:08:05.855 01:45:35 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:08:05.855 01:45:35 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:08:05.855 01:45:35 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:1a:00.0 00:08:09.157 nvme0n1 00:08:09.157 01:45:38 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:08:09.157 [2024-10-09 01:45:38.728270] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:08:09.157 request: 00:08:09.157 { 00:08:09.157 "nvme_ctrlr_name": "nvme0", 00:08:09.157 "password": "test", 00:08:09.157 "method": "bdev_nvme_opal_revert", 00:08:09.157 "req_id": 1 00:08:09.157 } 00:08:09.157 Got JSON-RPC error response 00:08:09.157 response: 00:08:09.157 { 00:08:09.157 "code": -32602, 00:08:09.157 "message": "Invalid parameters" 00:08:09.157 } 00:08:09.157 01:45:38 -- common/autotest_common.sh@1589 -- # true 00:08:09.157 01:45:38 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:08:09.157 01:45:38 -- common/autotest_common.sh@1593 -- # killprocess 4027222 00:08:09.157 01:45:38 -- common/autotest_common.sh@950 -- # '[' -z 4027222 ']' 00:08:09.157 01:45:38 -- common/autotest_common.sh@954 -- # kill -0 4027222 00:08:09.157 01:45:38 -- common/autotest_common.sh@955 -- # uname 00:08:09.157 01:45:38 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:09.157 01:45:38 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4027222 00:08:09.157 01:45:38 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:09.157 01:45:38 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:09.157 01:45:38 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4027222' 00:08:09.157 killing process with pid 4027222 00:08:09.157 01:45:38 -- common/autotest_common.sh@969 -- # kill 4027222 00:08:09.157 01:45:38 -- common/autotest_common.sh@974 -- # wait 4027222 00:08:13.351 01:45:42 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:08:13.351 01:45:42 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:08:13.351 01:45:42 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:13.351 01:45:42 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:13.351 01:45:42 -- spdk/autotest.sh@149 -- # timing_enter lib 00:08:13.351 01:45:42 -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:13.351 01:45:42 -- 
common/autotest_common.sh@10 -- # set +x 00:08:13.351 01:45:42 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:08:13.351 01:45:42 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:08:13.351 01:45:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:13.351 01:45:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:13.351 01:45:42 -- common/autotest_common.sh@10 -- # set +x 00:08:13.351 ************************************ 00:08:13.351 START TEST env 00:08:13.351 ************************************ 00:08:13.351 01:45:42 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env.sh 00:08:13.351 * Looking for test storage... 00:08:13.351 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env 00:08:13.351 01:45:42 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:13.351 01:45:42 env -- common/autotest_common.sh@1681 -- # lcov --version 00:08:13.351 01:45:42 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:13.351 01:45:43 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:13.351 01:45:43 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:13.351 01:45:43 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:13.351 01:45:43 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:13.351 01:45:43 env -- scripts/common.sh@336 -- # IFS=.-: 00:08:13.351 01:45:43 env -- scripts/common.sh@336 -- # read -ra ver1 00:08:13.351 01:45:43 env -- scripts/common.sh@337 -- # IFS=.-: 00:08:13.351 01:45:43 env -- scripts/common.sh@337 -- # read -ra ver2 00:08:13.351 01:45:43 env -- scripts/common.sh@338 -- # local 'op=<' 00:08:13.351 01:45:43 env -- scripts/common.sh@340 -- # ver1_l=2 00:08:13.351 01:45:43 env -- scripts/common.sh@341 -- # ver2_l=1 00:08:13.351 01:45:43 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:13.351 01:45:43 env -- scripts/common.sh@344 -- # case "$op" in 00:08:13.351 01:45:43 env -- scripts/common.sh@345 -- # : 1 00:08:13.351 01:45:43 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:13.351 01:45:43 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:13.351 01:45:43 env -- scripts/common.sh@365 -- # decimal 1 00:08:13.351 01:45:43 env -- scripts/common.sh@353 -- # local d=1 00:08:13.351 01:45:43 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:13.351 01:45:43 env -- scripts/common.sh@355 -- # echo 1 00:08:13.351 01:45:43 env -- scripts/common.sh@365 -- # ver1[v]=1 00:08:13.351 01:45:43 env -- scripts/common.sh@366 -- # decimal 2 00:08:13.612 01:45:43 env -- scripts/common.sh@353 -- # local d=2 00:08:13.612 01:45:43 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:13.612 01:45:43 env -- scripts/common.sh@355 -- # echo 2 00:08:13.612 01:45:43 env -- scripts/common.sh@366 -- # ver2[v]=2 00:08:13.612 01:45:43 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:13.612 01:45:43 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:13.612 01:45:43 env -- scripts/common.sh@368 -- # return 0 00:08:13.612 01:45:43 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:13.612 01:45:43 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:13.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.612 --rc genhtml_branch_coverage=1 00:08:13.612 --rc genhtml_function_coverage=1 00:08:13.612 --rc genhtml_legend=1 00:08:13.612 --rc geninfo_all_blocks=1 00:08:13.612 --rc geninfo_unexecuted_blocks=1 00:08:13.612 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:13.612 ' 00:08:13.612 01:45:43 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:13.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.612 --rc genhtml_branch_coverage=1 00:08:13.612 --rc genhtml_function_coverage=1 00:08:13.612 --rc genhtml_legend=1 00:08:13.612 --rc geninfo_all_blocks=1 00:08:13.612 --rc geninfo_unexecuted_blocks=1 00:08:13.612 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:13.612 ' 00:08:13.612 01:45:43 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:13.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.612 --rc genhtml_branch_coverage=1 00:08:13.612 --rc genhtml_function_coverage=1 00:08:13.612 --rc genhtml_legend=1 00:08:13.612 --rc geninfo_all_blocks=1 00:08:13.612 --rc geninfo_unexecuted_blocks=1 00:08:13.612 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:13.612 ' 00:08:13.612 01:45:43 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:13.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.612 --rc genhtml_branch_coverage=1 00:08:13.612 --rc genhtml_function_coverage=1 00:08:13.612 --rc genhtml_legend=1 00:08:13.612 --rc geninfo_all_blocks=1 00:08:13.612 --rc geninfo_unexecuted_blocks=1 00:08:13.612 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:13.612 ' 00:08:13.612 01:45:43 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:08:13.612 01:45:43 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:13.612 01:45:43 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:13.612 01:45:43 env -- common/autotest_common.sh@10 -- # set +x 00:08:13.612 ************************************ 00:08:13.612 START TEST env_memory 00:08:13.612 ************************************ 00:08:13.612 01:45:43 env.env_memory -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/memory/memory_ut 00:08:13.612 00:08:13.612 00:08:13.612 CUnit - A unit testing framework for C - Version 2.1-3 00:08:13.612 http://cunit.sourceforge.net/ 00:08:13.612 00:08:13.612 00:08:13.612 Suite: memory 00:08:13.612 Test: alloc and free memory map ...[2024-10-09 01:45:43.095508] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:08:13.612 passed 00:08:13.612 Test: mem map translation ...[2024-10-09 01:45:43.108708] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:13.612 [2024-10-09 01:45:43.108724] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:13.612 [2024-10-09 01:45:43.108757] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:13.612 [2024-10-09 01:45:43.108766] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:13.612 passed 00:08:13.612 Test: mem map registration ...[2024-10-09 01:45:43.128966] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:08:13.612 [2024-10-09 01:45:43.128981] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:08:13.612 passed 00:08:13.612 Test: mem map adjacent registrations ...passed 00:08:13.612 00:08:13.612 Run Summary: Type Total Ran Passed Failed Inactive 00:08:13.612 suites 1 1 n/a 0 0 00:08:13.612 tests 4 4 4 0 0 00:08:13.612 asserts 152 152 152 0 n/a 00:08:13.612 00:08:13.612 Elapsed time = 0.083 seconds 00:08:13.612 00:08:13.612 real 0m0.097s 00:08:13.612 user 0m0.086s 00:08:13.612 sys 0m0.010s 00:08:13.612 01:45:43 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:13.612 01:45:43 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:08:13.612 ************************************ 00:08:13.612 END TEST env_memory 00:08:13.612 ************************************ 00:08:13.612 01:45:43 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:08:13.612 01:45:43 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:13.612 01:45:43 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:13.612 01:45:43 env -- common/autotest_common.sh@10 -- # set +x 00:08:13.612 ************************************ 00:08:13.612 START TEST env_vtophys 00:08:13.612 ************************************ 00:08:13.612 01:45:43 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/vtophys/vtophys 00:08:13.612 EAL: lib.eal log level changed from notice to debug 00:08:13.612 EAL: Detected lcore 0 as core 0 on socket 0 00:08:13.612 EAL: Detected lcore 1 as core 1 on socket 0 00:08:13.612 EAL: Detected lcore 2 as core 2 on socket 0 00:08:13.612 EAL: Detected lcore 3 as 
core 3 on socket 0 00:08:13.612 EAL: Detected lcore 4 as core 4 on socket 0 00:08:13.612 EAL: Detected lcore 5 as core 8 on socket 0 00:08:13.612 EAL: Detected lcore 6 as core 9 on socket 0 00:08:13.612 EAL: Detected lcore 7 as core 10 on socket 0 00:08:13.612 EAL: Detected lcore 8 as core 11 on socket 0 00:08:13.612 EAL: Detected lcore 9 as core 16 on socket 0 00:08:13.612 EAL: Detected lcore 10 as core 17 on socket 0 00:08:13.612 EAL: Detected lcore 11 as core 18 on socket 0 00:08:13.612 EAL: Detected lcore 12 as core 19 on socket 0 00:08:13.612 EAL: Detected lcore 13 as core 20 on socket 0 00:08:13.612 EAL: Detected lcore 14 as core 24 on socket 0 00:08:13.612 EAL: Detected lcore 15 as core 25 on socket 0 00:08:13.612 EAL: Detected lcore 16 as core 26 on socket 0 00:08:13.612 EAL: Detected lcore 17 as core 27 on socket 0 00:08:13.612 EAL: Detected lcore 18 as core 0 on socket 1 00:08:13.612 EAL: Detected lcore 19 as core 1 on socket 1 00:08:13.612 EAL: Detected lcore 20 as core 2 on socket 1 00:08:13.612 EAL: Detected lcore 21 as core 3 on socket 1 00:08:13.612 EAL: Detected lcore 22 as core 4 on socket 1 00:08:13.612 EAL: Detected lcore 23 as core 8 on socket 1 00:08:13.612 EAL: Detected lcore 24 as core 9 on socket 1 00:08:13.612 EAL: Detected lcore 25 as core 10 on socket 1 00:08:13.612 EAL: Detected lcore 26 as core 11 on socket 1 00:08:13.612 EAL: Detected lcore 27 as core 16 on socket 1 00:08:13.612 EAL: Detected lcore 28 as core 17 on socket 1 00:08:13.612 EAL: Detected lcore 29 as core 18 on socket 1 00:08:13.613 EAL: Detected lcore 30 as core 19 on socket 1 00:08:13.613 EAL: Detected lcore 31 as core 20 on socket 1 00:08:13.613 EAL: Detected lcore 32 as core 24 on socket 1 00:08:13.613 EAL: Detected lcore 33 as core 25 on socket 1 00:08:13.613 EAL: Detected lcore 34 as core 26 on socket 1 00:08:13.613 EAL: Detected lcore 35 as core 27 on socket 1 00:08:13.613 EAL: Detected lcore 36 as core 0 on socket 0 00:08:13.613 EAL: Detected lcore 37 as core 1 on socket 0 00:08:13.613 EAL: Detected lcore 38 as core 2 on socket 0 00:08:13.613 EAL: Detected lcore 39 as core 3 on socket 0 00:08:13.613 EAL: Detected lcore 40 as core 4 on socket 0 00:08:13.613 EAL: Detected lcore 41 as core 8 on socket 0 00:08:13.613 EAL: Detected lcore 42 as core 9 on socket 0 00:08:13.613 EAL: Detected lcore 43 as core 10 on socket 0 00:08:13.613 EAL: Detected lcore 44 as core 11 on socket 0 00:08:13.613 EAL: Detected lcore 45 as core 16 on socket 0 00:08:13.613 EAL: Detected lcore 46 as core 17 on socket 0 00:08:13.613 EAL: Detected lcore 47 as core 18 on socket 0 00:08:13.613 EAL: Detected lcore 48 as core 19 on socket 0 00:08:13.613 EAL: Detected lcore 49 as core 20 on socket 0 00:08:13.613 EAL: Detected lcore 50 as core 24 on socket 0 00:08:13.613 EAL: Detected lcore 51 as core 25 on socket 0 00:08:13.613 EAL: Detected lcore 52 as core 26 on socket 0 00:08:13.613 EAL: Detected lcore 53 as core 27 on socket 0 00:08:13.613 EAL: Detected lcore 54 as core 0 on socket 1 00:08:13.613 EAL: Detected lcore 55 as core 1 on socket 1 00:08:13.613 EAL: Detected lcore 56 as core 2 on socket 1 00:08:13.613 EAL: Detected lcore 57 as core 3 on socket 1 00:08:13.613 EAL: Detected lcore 58 as core 4 on socket 1 00:08:13.613 EAL: Detected lcore 59 as core 8 on socket 1 00:08:13.613 EAL: Detected lcore 60 as core 9 on socket 1 00:08:13.613 EAL: Detected lcore 61 as core 10 on socket 1 00:08:13.613 EAL: Detected lcore 62 as core 11 on socket 1 00:08:13.613 EAL: Detected lcore 63 as core 16 on socket 1 00:08:13.613 EAL: 
Detected lcore 64 as core 17 on socket 1 00:08:13.613 EAL: Detected lcore 65 as core 18 on socket 1 00:08:13.613 EAL: Detected lcore 66 as core 19 on socket 1 00:08:13.613 EAL: Detected lcore 67 as core 20 on socket 1 00:08:13.613 EAL: Detected lcore 68 as core 24 on socket 1 00:08:13.613 EAL: Detected lcore 69 as core 25 on socket 1 00:08:13.613 EAL: Detected lcore 70 as core 26 on socket 1 00:08:13.613 EAL: Detected lcore 71 as core 27 on socket 1 00:08:13.613 EAL: Maximum logical cores by configuration: 128 00:08:13.613 EAL: Detected CPU lcores: 72 00:08:13.613 EAL: Detected NUMA nodes: 2 00:08:13.613 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:08:13.613 EAL: Checking presence of .so 'librte_eal.so.24' 00:08:13.613 EAL: Checking presence of .so 'librte_eal.so' 00:08:13.613 EAL: Detected static linkage of DPDK 00:08:13.613 EAL: No shared files mode enabled, IPC will be disabled 00:08:13.873 EAL: Bus pci wants IOVA as 'DC' 00:08:13.873 EAL: Buses did not request a specific IOVA mode. 00:08:13.873 EAL: IOMMU is available, selecting IOVA as VA mode. 00:08:13.873 EAL: Selected IOVA mode 'VA' 00:08:13.873 EAL: Probing VFIO support... 00:08:13.873 EAL: IOMMU type 1 (Type 1) is supported 00:08:13.873 EAL: IOMMU type 7 (sPAPR) is not supported 00:08:13.873 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:08:13.873 EAL: VFIO support initialized 00:08:13.873 EAL: Ask a virtual area of 0x2e000 bytes 00:08:13.873 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:13.873 EAL: Setting up physically contiguous memory... 00:08:13.873 EAL: Setting maximum number of open files to 524288 00:08:13.873 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:13.873 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:08:13.873 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:13.873 EAL: Ask a virtual area of 0x61000 bytes 00:08:13.873 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:13.873 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:13.873 EAL: Ask a virtual area of 0x400000000 bytes 00:08:13.873 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:13.873 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:13.873 EAL: Ask a virtual area of 0x61000 bytes 00:08:13.873 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:13.873 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:13.873 EAL: Ask a virtual area of 0x400000000 bytes 00:08:13.873 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:13.873 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:13.873 EAL: Ask a virtual area of 0x61000 bytes 00:08:13.873 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:13.873 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:13.873 EAL: Ask a virtual area of 0x400000000 bytes 00:08:13.873 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:13.873 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:13.873 EAL: Ask a virtual area of 0x61000 bytes 00:08:13.873 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:13.873 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:13.873 EAL: Ask a virtual area of 0x400000000 bytes 00:08:13.873 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:13.873 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:13.873 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:08:13.873 EAL: Ask a virtual area of 0x61000 bytes 00:08:13.873 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:08:13.873 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:13.873 EAL: Ask a virtual area of 0x400000000 bytes 00:08:13.873 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:08:13.873 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:08:13.873 EAL: Ask a virtual area of 0x61000 bytes 00:08:13.873 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:08:13.873 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:13.873 EAL: Ask a virtual area of 0x400000000 bytes 00:08:13.873 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:08:13.873 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:08:13.873 EAL: Ask a virtual area of 0x61000 bytes 00:08:13.873 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:08:13.873 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:13.873 EAL: Ask a virtual area of 0x400000000 bytes 00:08:13.873 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:08:13.873 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:08:13.873 EAL: Ask a virtual area of 0x61000 bytes 00:08:13.873 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:08:13.873 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:13.873 EAL: Ask a virtual area of 0x400000000 bytes 00:08:13.873 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:08:13.873 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:08:13.873 EAL: Hugepages will be freed exactly as allocated. 00:08:13.873 EAL: No shared files mode enabled, IPC is disabled 00:08:13.873 EAL: No shared files mode enabled, IPC is disabled 00:08:13.873 EAL: TSC frequency is ~2300000 KHz 00:08:13.873 EAL: Main lcore 0 is ready (tid=7f334e846a00;cpuset=[0]) 00:08:13.873 EAL: Trying to obtain current memory policy. 00:08:13.873 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:13.873 EAL: Restoring previous memory policy: 0 00:08:13.873 EAL: request: mp_malloc_sync 00:08:13.873 EAL: No shared files mode enabled, IPC is disabled 00:08:13.873 EAL: Heap on socket 0 was expanded by 2MB 00:08:13.873 EAL: No shared files mode enabled, IPC is disabled 00:08:13.873 EAL: Mem event callback 'spdk:(nil)' registered 00:08:13.873 00:08:13.873 00:08:13.873 CUnit - A unit testing framework for C - Version 2.1-3 00:08:13.873 http://cunit.sourceforge.net/ 00:08:13.873 00:08:13.873 00:08:13.873 Suite: components_suite 00:08:13.873 Test: vtophys_malloc_test ...passed 00:08:13.873 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:08:13.873 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:13.873 EAL: Restoring previous memory policy: 4 00:08:13.873 EAL: Calling mem event callback 'spdk:(nil)' 00:08:13.873 EAL: request: mp_malloc_sync 00:08:13.873 EAL: No shared files mode enabled, IPC is disabled 00:08:13.873 EAL: Heap on socket 0 was expanded by 4MB 00:08:13.873 EAL: Calling mem event callback 'spdk:(nil)' 00:08:13.873 EAL: request: mp_malloc_sync 00:08:13.873 EAL: No shared files mode enabled, IPC is disabled 00:08:13.873 EAL: Heap on socket 0 was shrunk by 4MB 00:08:13.873 EAL: Trying to obtain current memory policy. 
00:08:13.873 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:13.873 EAL: Restoring previous memory policy: 4 00:08:13.873 EAL: Calling mem event callback 'spdk:(nil)' 00:08:13.873 EAL: request: mp_malloc_sync 00:08:13.873 EAL: No shared files mode enabled, IPC is disabled 00:08:13.873 EAL: Heap on socket 0 was expanded by 6MB 00:08:13.873 EAL: Calling mem event callback 'spdk:(nil)' 00:08:13.873 EAL: request: mp_malloc_sync 00:08:13.873 EAL: No shared files mode enabled, IPC is disabled 00:08:13.873 EAL: Heap on socket 0 was shrunk by 6MB 00:08:13.873 EAL: Trying to obtain current memory policy. 00:08:13.873 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:13.873 EAL: Restoring previous memory policy: 4 00:08:13.873 EAL: Calling mem event callback 'spdk:(nil)' 00:08:13.873 EAL: request: mp_malloc_sync 00:08:13.873 EAL: No shared files mode enabled, IPC is disabled 00:08:13.873 EAL: Heap on socket 0 was expanded by 10MB 00:08:13.874 EAL: Calling mem event callback 'spdk:(nil)' 00:08:13.874 EAL: request: mp_malloc_sync 00:08:13.874 EAL: No shared files mode enabled, IPC is disabled 00:08:13.874 EAL: Heap on socket 0 was shrunk by 10MB 00:08:13.874 EAL: Trying to obtain current memory policy. 00:08:13.874 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:13.874 EAL: Restoring previous memory policy: 4 00:08:13.874 EAL: Calling mem event callback 'spdk:(nil)' 00:08:13.874 EAL: request: mp_malloc_sync 00:08:13.874 EAL: No shared files mode enabled, IPC is disabled 00:08:13.874 EAL: Heap on socket 0 was expanded by 18MB 00:08:13.874 EAL: Calling mem event callback 'spdk:(nil)' 00:08:13.874 EAL: request: mp_malloc_sync 00:08:13.874 EAL: No shared files mode enabled, IPC is disabled 00:08:13.874 EAL: Heap on socket 0 was shrunk by 18MB 00:08:13.874 EAL: Trying to obtain current memory policy. 00:08:13.874 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:13.874 EAL: Restoring previous memory policy: 4 00:08:13.874 EAL: Calling mem event callback 'spdk:(nil)' 00:08:13.874 EAL: request: mp_malloc_sync 00:08:13.874 EAL: No shared files mode enabled, IPC is disabled 00:08:13.874 EAL: Heap on socket 0 was expanded by 34MB 00:08:13.874 EAL: Calling mem event callback 'spdk:(nil)' 00:08:13.874 EAL: request: mp_malloc_sync 00:08:13.874 EAL: No shared files mode enabled, IPC is disabled 00:08:13.874 EAL: Heap on socket 0 was shrunk by 34MB 00:08:13.874 EAL: Trying to obtain current memory policy. 00:08:13.874 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:13.874 EAL: Restoring previous memory policy: 4 00:08:13.874 EAL: Calling mem event callback 'spdk:(nil)' 00:08:13.874 EAL: request: mp_malloc_sync 00:08:13.874 EAL: No shared files mode enabled, IPC is disabled 00:08:13.874 EAL: Heap on socket 0 was expanded by 66MB 00:08:13.874 EAL: Calling mem event callback 'spdk:(nil)' 00:08:13.874 EAL: request: mp_malloc_sync 00:08:13.874 EAL: No shared files mode enabled, IPC is disabled 00:08:13.874 EAL: Heap on socket 0 was shrunk by 66MB 00:08:13.874 EAL: Trying to obtain current memory policy. 
00:08:13.874 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:13.874 EAL: Restoring previous memory policy: 4 00:08:13.874 EAL: Calling mem event callback 'spdk:(nil)' 00:08:13.874 EAL: request: mp_malloc_sync 00:08:13.874 EAL: No shared files mode enabled, IPC is disabled 00:08:13.874 EAL: Heap on socket 0 was expanded by 130MB 00:08:13.874 EAL: Calling mem event callback 'spdk:(nil)' 00:08:13.874 EAL: request: mp_malloc_sync 00:08:13.874 EAL: No shared files mode enabled, IPC is disabled 00:08:13.874 EAL: Heap on socket 0 was shrunk by 130MB 00:08:13.874 EAL: Trying to obtain current memory policy. 00:08:13.874 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:13.874 EAL: Restoring previous memory policy: 4 00:08:13.874 EAL: Calling mem event callback 'spdk:(nil)' 00:08:13.874 EAL: request: mp_malloc_sync 00:08:13.874 EAL: No shared files mode enabled, IPC is disabled 00:08:13.874 EAL: Heap on socket 0 was expanded by 258MB 00:08:14.133 EAL: Calling mem event callback 'spdk:(nil)' 00:08:14.133 EAL: request: mp_malloc_sync 00:08:14.133 EAL: No shared files mode enabled, IPC is disabled 00:08:14.133 EAL: Heap on socket 0 was shrunk by 258MB 00:08:14.133 EAL: Trying to obtain current memory policy. 00:08:14.133 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:14.133 EAL: Restoring previous memory policy: 4 00:08:14.133 EAL: Calling mem event callback 'spdk:(nil)' 00:08:14.133 EAL: request: mp_malloc_sync 00:08:14.133 EAL: No shared files mode enabled, IPC is disabled 00:08:14.133 EAL: Heap on socket 0 was expanded by 514MB 00:08:14.393 EAL: Calling mem event callback 'spdk:(nil)' 00:08:14.393 EAL: request: mp_malloc_sync 00:08:14.393 EAL: No shared files mode enabled, IPC is disabled 00:08:14.393 EAL: Heap on socket 0 was shrunk by 514MB 00:08:14.393 EAL: Trying to obtain current memory policy. 
00:08:14.393 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:14.652 EAL: Restoring previous memory policy: 4 00:08:14.652 EAL: Calling mem event callback 'spdk:(nil)' 00:08:14.652 EAL: request: mp_malloc_sync 00:08:14.652 EAL: No shared files mode enabled, IPC is disabled 00:08:14.652 EAL: Heap on socket 0 was expanded by 1026MB 00:08:14.652 EAL: Calling mem event callback 'spdk:(nil)' 00:08:14.911 EAL: request: mp_malloc_sync 00:08:14.911 EAL: No shared files mode enabled, IPC is disabled 00:08:14.911 EAL: Heap on socket 0 was shrunk by 1026MB 00:08:14.911 passed 00:08:14.911 00:08:14.911 Run Summary: Type Total Ran Passed Failed Inactive 00:08:14.911 suites 1 1 n/a 0 0 00:08:14.911 tests 2 2 2 0 0 00:08:14.911 asserts 497 497 497 0 n/a 00:08:14.911 00:08:14.911 Elapsed time = 1.107 seconds 00:08:14.911 EAL: Calling mem event callback 'spdk:(nil)' 00:08:14.911 EAL: request: mp_malloc_sync 00:08:14.911 EAL: No shared files mode enabled, IPC is disabled 00:08:14.911 EAL: Heap on socket 0 was shrunk by 2MB 00:08:14.911 EAL: No shared files mode enabled, IPC is disabled 00:08:14.911 EAL: No shared files mode enabled, IPC is disabled 00:08:14.911 EAL: No shared files mode enabled, IPC is disabled 00:08:14.911 00:08:14.911 real 0m1.237s 00:08:14.911 user 0m0.704s 00:08:14.911 sys 0m0.506s 00:08:14.911 01:45:44 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:14.911 01:45:44 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:08:14.911 ************************************ 00:08:14.911 END TEST env_vtophys 00:08:14.911 ************************************ 00:08:14.912 01:45:44 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:08:14.912 01:45:44 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:14.912 01:45:44 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:14.912 01:45:44 env -- common/autotest_common.sh@10 -- # set +x 00:08:14.912 ************************************ 00:08:14.912 START TEST env_pci 00:08:14.912 ************************************ 00:08:14.912 01:45:44 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/pci/pci_ut 00:08:14.912 00:08:14.912 00:08:14.912 CUnit - A unit testing framework for C - Version 2.1-3 00:08:14.912 http://cunit.sourceforge.net/ 00:08:14.912 00:08:14.912 00:08:14.912 Suite: pci 00:08:15.171 Test: pci_hook ...[2024-10-09 01:45:44.579514] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/pci.c:1050:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 4028541 has claimed it 00:08:15.171 EAL: Cannot find device (10000:00:01.0) 00:08:15.171 EAL: Failed to attach device on primary process 00:08:15.171 passed 00:08:15.171 00:08:15.171 Run Summary: Type Total Ran Passed Failed Inactive 00:08:15.171 suites 1 1 n/a 0 0 00:08:15.171 tests 1 1 1 0 0 00:08:15.171 asserts 25 25 25 0 n/a 00:08:15.171 00:08:15.171 Elapsed time = 0.037 seconds 00:08:15.171 00:08:15.171 real 0m0.058s 00:08:15.171 user 0m0.013s 00:08:15.171 sys 0m0.045s 00:08:15.171 01:45:44 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:15.171 01:45:44 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:08:15.171 ************************************ 00:08:15.171 END TEST env_pci 00:08:15.171 ************************************ 00:08:15.171 01:45:44 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:08:15.171 
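The "Heap on socket 0 was expanded by / shrunk by" lines in the env_vtophys run above correspond to EAL taking 2 MiB hugepages from, and returning them to, the per-node pools that setup.sh reserved (1024 pages on each node in the status table earlier). A quick, SPDK-independent way to watch that pool from sysfs while a test runs (assuming the standard NUMA hugepage layout under /sys/devices/system/node):

    # report free vs. reserved 2 MiB hugepages per NUMA node
    for n in /sys/devices/system/node/node*; do
        free=$(cat "$n/hugepages/hugepages-2048kB/free_hugepages")
        total=$(cat "$n/hugepages/hugepages-2048kB/nr_hugepages")
        echo "$(basename "$n"): $free free of $total"
    done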
01:45:44 env -- env/env.sh@15 -- # uname 00:08:15.171 01:45:44 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:08:15.171 01:45:44 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:08:15.171 01:45:44 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:15.171 01:45:44 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:15.171 01:45:44 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:15.171 01:45:44 env -- common/autotest_common.sh@10 -- # set +x 00:08:15.171 ************************************ 00:08:15.171 START TEST env_dpdk_post_init 00:08:15.171 ************************************ 00:08:15.171 01:45:44 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:15.171 EAL: Detected CPU lcores: 72 00:08:15.171 EAL: Detected NUMA nodes: 2 00:08:15.171 EAL: Detected static linkage of DPDK 00:08:15.171 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:15.171 EAL: Selected IOVA mode 'VA' 00:08:15.171 EAL: VFIO support initialized 00:08:15.171 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:15.431 EAL: Using IOMMU type 1 (Type 1) 00:08:15.999 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:1a:00.0 (socket 0) 00:08:21.270 EAL: Releasing PCI mapped resource for 0000:1a:00.0 00:08:21.270 EAL: Calling pci_unmap_resource for 0000:1a:00.0 at 0x202001000000 00:08:21.838 Starting DPDK initialization... 00:08:21.838 Starting SPDK post initialization... 00:08:21.838 SPDK NVMe probe 00:08:21.838 Attaching to 0000:1a:00.0 00:08:21.838 Attached to 0000:1a:00.0 00:08:21.838 Cleaning up... 
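env_dpdk_post_init could only probe and attach to 0000:1a:00.0 because setup.sh had already bound that controller (and the I/OAT channels) to vfio-pci, as the "-> vfio-pci" lines earlier show. A small sysfs-only sketch for auditing those bindings by hand, useful when a probe unexpectedly finds nothing (plain readlink, no SPDK tooling assumed):

    # list each PCI function together with the kernel driver currently bound to it
    for dev in /sys/bus/pci/devices/*; do
        drv=$(readlink -f "$dev/driver" 2>/dev/null)
        printf '%s %s\n' "$(basename "$dev")" "${drv:+${drv##*/}}"
    done | grep -E 'vfio-pci|nvme|ioatdma'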
00:08:21.838 00:08:21.838 real 0m6.518s 00:08:21.838 user 0m4.749s 00:08:21.838 sys 0m1.020s 00:08:21.838 01:45:51 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:21.838 01:45:51 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:08:21.838 ************************************ 00:08:21.838 END TEST env_dpdk_post_init 00:08:21.838 ************************************ 00:08:21.838 01:45:51 env -- env/env.sh@26 -- # uname 00:08:21.838 01:45:51 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:08:21.838 01:45:51 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:08:21.838 01:45:51 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:21.838 01:45:51 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:21.838 01:45:51 env -- common/autotest_common.sh@10 -- # set +x 00:08:21.838 ************************************ 00:08:21.838 START TEST env_mem_callbacks 00:08:21.838 ************************************ 00:08:21.838 01:45:51 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:08:21.838 EAL: Detected CPU lcores: 72 00:08:21.838 EAL: Detected NUMA nodes: 2 00:08:21.838 EAL: Detected static linkage of DPDK 00:08:21.838 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:21.838 EAL: Selected IOVA mode 'VA' 00:08:21.838 EAL: VFIO support initialized 00:08:21.838 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:21.838 00:08:21.838 00:08:21.838 CUnit - A unit testing framework for C - Version 2.1-3 00:08:21.838 http://cunit.sourceforge.net/ 00:08:21.838 00:08:21.838 00:08:21.838 Suite: memory 00:08:21.838 Test: test ... 
00:08:21.838 register 0x200000200000 2097152 00:08:21.838 malloc 3145728 00:08:21.838 register 0x200000400000 4194304 00:08:21.838 buf 0x200000500000 len 3145728 PASSED 00:08:21.838 malloc 64 00:08:21.838 buf 0x2000004fff40 len 64 PASSED 00:08:21.838 malloc 4194304 00:08:21.838 register 0x200000800000 6291456 00:08:21.838 buf 0x200000a00000 len 4194304 PASSED 00:08:21.838 free 0x200000500000 3145728 00:08:21.838 free 0x2000004fff40 64 00:08:21.838 unregister 0x200000400000 4194304 PASSED 00:08:21.838 free 0x200000a00000 4194304 00:08:21.838 unregister 0x200000800000 6291456 PASSED 00:08:21.838 malloc 8388608 00:08:21.838 register 0x200000400000 10485760 00:08:21.838 buf 0x200000600000 len 8388608 PASSED 00:08:21.838 free 0x200000600000 8388608 00:08:21.838 unregister 0x200000400000 10485760 PASSED 00:08:21.838 passed 00:08:21.838 00:08:21.838 Run Summary: Type Total Ran Passed Failed Inactive 00:08:21.838 suites 1 1 n/a 0 0 00:08:21.838 tests 1 1 1 0 0 00:08:21.838 asserts 15 15 15 0 n/a 00:08:21.838 00:08:21.838 Elapsed time = 0.005 seconds 00:08:21.838 00:08:21.838 real 0m0.055s 00:08:21.838 user 0m0.012s 00:08:21.838 sys 0m0.043s 00:08:21.838 01:45:51 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:21.838 01:45:51 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:08:21.838 ************************************ 00:08:21.838 END TEST env_mem_callbacks 00:08:21.838 ************************************ 00:08:21.838 00:08:21.838 real 0m8.556s 00:08:21.838 user 0m5.829s 00:08:21.838 sys 0m1.998s 00:08:21.838 01:45:51 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:21.838 01:45:51 env -- common/autotest_common.sh@10 -- # set +x 00:08:21.838 ************************************ 00:08:21.838 END TEST env 00:08:21.838 ************************************ 00:08:21.838 01:45:51 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:08:21.838 01:45:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:21.838 01:45:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:21.838 01:45:51 -- common/autotest_common.sh@10 -- # set +x 00:08:21.838 ************************************ 00:08:21.838 START TEST rpc 00:08:21.838 ************************************ 00:08:21.838 01:45:51 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/rpc.sh 00:08:22.098 * Looking for test storage... 
00:08:22.098 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:08:22.098 01:45:51 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:22.098 01:45:51 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:08:22.098 01:45:51 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:22.098 01:45:51 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:22.098 01:45:51 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:22.098 01:45:51 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:22.098 01:45:51 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:22.098 01:45:51 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:22.098 01:45:51 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:22.098 01:45:51 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:22.098 01:45:51 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:22.098 01:45:51 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:22.098 01:45:51 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:22.098 01:45:51 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:22.098 01:45:51 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:22.098 01:45:51 rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:22.098 01:45:51 rpc -- scripts/common.sh@345 -- # : 1 00:08:22.098 01:45:51 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:22.098 01:45:51 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:22.098 01:45:51 rpc -- scripts/common.sh@365 -- # decimal 1 00:08:22.098 01:45:51 rpc -- scripts/common.sh@353 -- # local d=1 00:08:22.098 01:45:51 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:22.098 01:45:51 rpc -- scripts/common.sh@355 -- # echo 1 00:08:22.098 01:45:51 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:22.098 01:45:51 rpc -- scripts/common.sh@366 -- # decimal 2 00:08:22.098 01:45:51 rpc -- scripts/common.sh@353 -- # local d=2 00:08:22.098 01:45:51 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:22.098 01:45:51 rpc -- scripts/common.sh@355 -- # echo 2 00:08:22.098 01:45:51 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:22.098 01:45:51 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:22.098 01:45:51 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:22.098 01:45:51 rpc -- scripts/common.sh@368 -- # return 0 00:08:22.098 01:45:51 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:22.098 01:45:51 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:22.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.098 --rc genhtml_branch_coverage=1 00:08:22.098 --rc genhtml_function_coverage=1 00:08:22.098 --rc genhtml_legend=1 00:08:22.098 --rc geninfo_all_blocks=1 00:08:22.098 --rc geninfo_unexecuted_blocks=1 00:08:22.098 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:22.098 ' 00:08:22.098 01:45:51 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:22.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.098 --rc genhtml_branch_coverage=1 00:08:22.098 --rc genhtml_function_coverage=1 00:08:22.098 --rc genhtml_legend=1 00:08:22.098 --rc geninfo_all_blocks=1 00:08:22.098 --rc geninfo_unexecuted_blocks=1 00:08:22.098 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:22.098 ' 00:08:22.098 01:45:51 rpc -- common/autotest_common.sh@1695 -- # 
export 'LCOV=lcov 00:08:22.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.098 --rc genhtml_branch_coverage=1 00:08:22.098 --rc genhtml_function_coverage=1 00:08:22.098 --rc genhtml_legend=1 00:08:22.098 --rc geninfo_all_blocks=1 00:08:22.098 --rc geninfo_unexecuted_blocks=1 00:08:22.098 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:22.098 ' 00:08:22.098 01:45:51 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:22.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.098 --rc genhtml_branch_coverage=1 00:08:22.098 --rc genhtml_function_coverage=1 00:08:22.098 --rc genhtml_legend=1 00:08:22.098 --rc geninfo_all_blocks=1 00:08:22.098 --rc geninfo_unexecuted_blocks=1 00:08:22.099 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:22.099 ' 00:08:22.099 01:45:51 rpc -- rpc/rpc.sh@65 -- # spdk_pid=4029700 00:08:22.099 01:45:51 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:08:22.099 01:45:51 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:22.099 01:45:51 rpc -- rpc/rpc.sh@67 -- # waitforlisten 4029700 00:08:22.099 01:45:51 rpc -- common/autotest_common.sh@831 -- # '[' -z 4029700 ']' 00:08:22.099 01:45:51 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.099 01:45:51 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:22.099 01:45:51 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.099 01:45:51 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:22.099 01:45:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.099 [2024-10-09 01:45:51.689903] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 00:08:22.099 [2024-10-09 01:45:51.689974] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4029700 ] 00:08:22.099 [2024-10-09 01:45:51.760906] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.358 [2024-10-09 01:45:51.809757] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:08:22.358 [2024-10-09 01:45:51.809798] app.c: 614:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 4029700' to capture a snapshot of events at runtime. 00:08:22.358 [2024-10-09 01:45:51.809808] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:22.358 [2024-10-09 01:45:51.809822] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:22.358 [2024-10-09 01:45:51.809830] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid4029700 for offline analysis/debug. 
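The rpc_integrity test that follows never touches the target directly; it drives this spdk_tgt instance through scripts/rpc.py on the UNIX socket announced above (the default /var/tmp/spdk.sock). Roughly the same sequence can be replayed by hand from the SPDK checkout:

    # create an 8 MB malloc bdev with 512-byte blocks; it is auto-named Malloc0
    ./scripts/rpc.py bdev_malloc_create 8 512
    # dump the bdev list as JSON (the test walks this output with jq)
    ./scripts/rpc.py bdev_get_bdevs
    # stack a passthru bdev on top of it, then tear everything down again
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0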
00:08:22.358 [2024-10-09 01:45:51.810324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.617 01:45:52 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:22.617 01:45:52 rpc -- common/autotest_common.sh@864 -- # return 0 00:08:22.617 01:45:52 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:08:22.617 01:45:52 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:08:22.617 01:45:52 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:08:22.617 01:45:52 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:08:22.617 01:45:52 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:22.617 01:45:52 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:22.617 01:45:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.617 ************************************ 00:08:22.617 START TEST rpc_integrity 00:08:22.617 ************************************ 00:08:22.617 01:45:52 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:08:22.617 01:45:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:22.617 01:45:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.617 01:45:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:22.617 01:45:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.617 01:45:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:22.617 01:45:52 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:22.617 01:45:52 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:22.617 01:45:52 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:22.617 01:45:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.617 01:45:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:22.617 01:45:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.617 01:45:52 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:08:22.617 01:45:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:22.617 01:45:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.617 01:45:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:22.617 01:45:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.617 01:45:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:22.617 { 00:08:22.617 "name": "Malloc0", 00:08:22.617 "aliases": [ 00:08:22.617 "c2d7cf79-d4de-42e4-a603-0b54c4139c4c" 00:08:22.617 ], 00:08:22.617 "product_name": "Malloc disk", 00:08:22.617 "block_size": 512, 00:08:22.617 "num_blocks": 16384, 00:08:22.617 "uuid": "c2d7cf79-d4de-42e4-a603-0b54c4139c4c", 00:08:22.617 "assigned_rate_limits": { 00:08:22.617 "rw_ios_per_sec": 0, 00:08:22.617 "rw_mbytes_per_sec": 0, 00:08:22.617 "r_mbytes_per_sec": 0, 00:08:22.617 "w_mbytes_per_sec": 
0 00:08:22.617 }, 00:08:22.617 "claimed": false, 00:08:22.617 "zoned": false, 00:08:22.617 "supported_io_types": { 00:08:22.617 "read": true, 00:08:22.617 "write": true, 00:08:22.617 "unmap": true, 00:08:22.617 "flush": true, 00:08:22.617 "reset": true, 00:08:22.617 "nvme_admin": false, 00:08:22.617 "nvme_io": false, 00:08:22.617 "nvme_io_md": false, 00:08:22.617 "write_zeroes": true, 00:08:22.617 "zcopy": true, 00:08:22.617 "get_zone_info": false, 00:08:22.617 "zone_management": false, 00:08:22.617 "zone_append": false, 00:08:22.617 "compare": false, 00:08:22.617 "compare_and_write": false, 00:08:22.617 "abort": true, 00:08:22.617 "seek_hole": false, 00:08:22.617 "seek_data": false, 00:08:22.617 "copy": true, 00:08:22.617 "nvme_iov_md": false 00:08:22.617 }, 00:08:22.617 "memory_domains": [ 00:08:22.617 { 00:08:22.618 "dma_device_id": "system", 00:08:22.618 "dma_device_type": 1 00:08:22.618 }, 00:08:22.618 { 00:08:22.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.618 "dma_device_type": 2 00:08:22.618 } 00:08:22.618 ], 00:08:22.618 "driver_specific": {} 00:08:22.618 } 00:08:22.618 ]' 00:08:22.618 01:45:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:22.618 01:45:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:22.618 01:45:52 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:08:22.618 01:45:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.618 01:45:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:22.618 [2024-10-09 01:45:52.213804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:08:22.618 [2024-10-09 01:45:52.213846] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:22.618 [2024-10-09 01:45:52.213866] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5302f70 00:08:22.618 [2024-10-09 01:45:52.213876] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:22.618 [2024-10-09 01:45:52.214832] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:22.618 [2024-10-09 01:45:52.214859] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:22.618 Passthru0 00:08:22.618 01:45:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.618 01:45:52 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:22.618 01:45:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.618 01:45:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:22.618 01:45:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.618 01:45:52 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:22.618 { 00:08:22.618 "name": "Malloc0", 00:08:22.618 "aliases": [ 00:08:22.618 "c2d7cf79-d4de-42e4-a603-0b54c4139c4c" 00:08:22.618 ], 00:08:22.618 "product_name": "Malloc disk", 00:08:22.618 "block_size": 512, 00:08:22.618 "num_blocks": 16384, 00:08:22.618 "uuid": "c2d7cf79-d4de-42e4-a603-0b54c4139c4c", 00:08:22.618 "assigned_rate_limits": { 00:08:22.618 "rw_ios_per_sec": 0, 00:08:22.618 "rw_mbytes_per_sec": 0, 00:08:22.618 "r_mbytes_per_sec": 0, 00:08:22.618 "w_mbytes_per_sec": 0 00:08:22.618 }, 00:08:22.618 "claimed": true, 00:08:22.618 "claim_type": "exclusive_write", 00:08:22.618 "zoned": false, 00:08:22.618 "supported_io_types": { 00:08:22.618 "read": true, 00:08:22.618 "write": true, 00:08:22.618 "unmap": true, 
00:08:22.618 "flush": true, 00:08:22.618 "reset": true, 00:08:22.618 "nvme_admin": false, 00:08:22.618 "nvme_io": false, 00:08:22.618 "nvme_io_md": false, 00:08:22.618 "write_zeroes": true, 00:08:22.618 "zcopy": true, 00:08:22.618 "get_zone_info": false, 00:08:22.618 "zone_management": false, 00:08:22.618 "zone_append": false, 00:08:22.618 "compare": false, 00:08:22.618 "compare_and_write": false, 00:08:22.618 "abort": true, 00:08:22.618 "seek_hole": false, 00:08:22.618 "seek_data": false, 00:08:22.618 "copy": true, 00:08:22.618 "nvme_iov_md": false 00:08:22.618 }, 00:08:22.618 "memory_domains": [ 00:08:22.618 { 00:08:22.618 "dma_device_id": "system", 00:08:22.618 "dma_device_type": 1 00:08:22.618 }, 00:08:22.618 { 00:08:22.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.618 "dma_device_type": 2 00:08:22.618 } 00:08:22.618 ], 00:08:22.618 "driver_specific": {} 00:08:22.618 }, 00:08:22.618 { 00:08:22.618 "name": "Passthru0", 00:08:22.618 "aliases": [ 00:08:22.618 "9fcb085a-1042-5b32-835b-7b5d23ed5f10" 00:08:22.618 ], 00:08:22.618 "product_name": "passthru", 00:08:22.618 "block_size": 512, 00:08:22.618 "num_blocks": 16384, 00:08:22.618 "uuid": "9fcb085a-1042-5b32-835b-7b5d23ed5f10", 00:08:22.618 "assigned_rate_limits": { 00:08:22.618 "rw_ios_per_sec": 0, 00:08:22.618 "rw_mbytes_per_sec": 0, 00:08:22.618 "r_mbytes_per_sec": 0, 00:08:22.618 "w_mbytes_per_sec": 0 00:08:22.618 }, 00:08:22.618 "claimed": false, 00:08:22.618 "zoned": false, 00:08:22.618 "supported_io_types": { 00:08:22.618 "read": true, 00:08:22.618 "write": true, 00:08:22.618 "unmap": true, 00:08:22.618 "flush": true, 00:08:22.618 "reset": true, 00:08:22.618 "nvme_admin": false, 00:08:22.618 "nvme_io": false, 00:08:22.618 "nvme_io_md": false, 00:08:22.618 "write_zeroes": true, 00:08:22.618 "zcopy": true, 00:08:22.618 "get_zone_info": false, 00:08:22.618 "zone_management": false, 00:08:22.618 "zone_append": false, 00:08:22.618 "compare": false, 00:08:22.618 "compare_and_write": false, 00:08:22.618 "abort": true, 00:08:22.618 "seek_hole": false, 00:08:22.618 "seek_data": false, 00:08:22.618 "copy": true, 00:08:22.618 "nvme_iov_md": false 00:08:22.618 }, 00:08:22.618 "memory_domains": [ 00:08:22.618 { 00:08:22.618 "dma_device_id": "system", 00:08:22.618 "dma_device_type": 1 00:08:22.618 }, 00:08:22.618 { 00:08:22.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.618 "dma_device_type": 2 00:08:22.618 } 00:08:22.618 ], 00:08:22.618 "driver_specific": { 00:08:22.618 "passthru": { 00:08:22.618 "name": "Passthru0", 00:08:22.618 "base_bdev_name": "Malloc0" 00:08:22.618 } 00:08:22.618 } 00:08:22.618 } 00:08:22.618 ]' 00:08:22.618 01:45:52 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:22.877 01:45:52 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:22.877 01:45:52 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:22.877 01:45:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.877 01:45:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:22.877 01:45:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.877 01:45:52 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:08:22.877 01:45:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.877 01:45:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:22.877 01:45:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.877 01:45:52 rpc.rpc_integrity -- 
rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:22.877 01:45:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.877 01:45:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:22.877 01:45:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.877 01:45:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:22.877 01:45:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:22.877 01:45:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:22.877 00:08:22.877 real 0m0.283s 00:08:22.877 user 0m0.167s 00:08:22.877 sys 0m0.056s 00:08:22.877 01:45:52 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:22.877 01:45:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:22.877 ************************************ 00:08:22.877 END TEST rpc_integrity 00:08:22.877 ************************************ 00:08:22.877 01:45:52 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:08:22.877 01:45:52 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:22.877 01:45:52 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:22.877 01:45:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.877 ************************************ 00:08:22.877 START TEST rpc_plugins 00:08:22.877 ************************************ 00:08:22.877 01:45:52 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:08:22.877 01:45:52 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:08:22.877 01:45:52 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.877 01:45:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:22.877 01:45:52 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.877 01:45:52 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:08:22.877 01:45:52 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:08:22.877 01:45:52 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.877 01:45:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:22.877 01:45:52 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.877 01:45:52 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:08:22.877 { 00:08:22.877 "name": "Malloc1", 00:08:22.877 "aliases": [ 00:08:22.877 "953b1ecb-7ca6-47e1-9ef7-144e7bfef764" 00:08:22.877 ], 00:08:22.877 "product_name": "Malloc disk", 00:08:22.877 "block_size": 4096, 00:08:22.877 "num_blocks": 256, 00:08:22.877 "uuid": "953b1ecb-7ca6-47e1-9ef7-144e7bfef764", 00:08:22.877 "assigned_rate_limits": { 00:08:22.877 "rw_ios_per_sec": 0, 00:08:22.877 "rw_mbytes_per_sec": 0, 00:08:22.877 "r_mbytes_per_sec": 0, 00:08:22.877 "w_mbytes_per_sec": 0 00:08:22.877 }, 00:08:22.877 "claimed": false, 00:08:22.877 "zoned": false, 00:08:22.877 "supported_io_types": { 00:08:22.877 "read": true, 00:08:22.877 "write": true, 00:08:22.877 "unmap": true, 00:08:22.877 "flush": true, 00:08:22.877 "reset": true, 00:08:22.877 "nvme_admin": false, 00:08:22.877 "nvme_io": false, 00:08:22.877 "nvme_io_md": false, 00:08:22.877 "write_zeroes": true, 00:08:22.877 "zcopy": true, 00:08:22.877 "get_zone_info": false, 00:08:22.877 "zone_management": false, 00:08:22.877 "zone_append": false, 00:08:22.877 "compare": false, 00:08:22.877 "compare_and_write": false, 00:08:22.877 "abort": true, 00:08:22.877 "seek_hole": false, 00:08:22.877 "seek_data": false, 00:08:22.877 "copy": true, 00:08:22.877 
"nvme_iov_md": false 00:08:22.877 }, 00:08:22.877 "memory_domains": [ 00:08:22.877 { 00:08:22.877 "dma_device_id": "system", 00:08:22.877 "dma_device_type": 1 00:08:22.877 }, 00:08:22.877 { 00:08:22.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.877 "dma_device_type": 2 00:08:22.877 } 00:08:22.877 ], 00:08:22.877 "driver_specific": {} 00:08:22.877 } 00:08:22.877 ]' 00:08:22.877 01:45:52 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:08:22.877 01:45:52 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:08:22.877 01:45:52 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:08:22.877 01:45:52 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.877 01:45:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:22.877 01:45:52 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.877 01:45:52 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:08:22.877 01:45:52 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.877 01:45:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:22.877 01:45:52 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.136 01:45:52 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:08:23.136 01:45:52 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:08:23.136 01:45:52 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:08:23.136 00:08:23.136 real 0m0.147s 00:08:23.136 user 0m0.090s 00:08:23.136 sys 0m0.026s 00:08:23.136 01:45:52 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:23.136 01:45:52 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:08:23.136 ************************************ 00:08:23.136 END TEST rpc_plugins 00:08:23.136 ************************************ 00:08:23.136 01:45:52 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:08:23.136 01:45:52 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:23.137 01:45:52 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:23.137 01:45:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.137 ************************************ 00:08:23.137 START TEST rpc_trace_cmd_test 00:08:23.137 ************************************ 00:08:23.137 01:45:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:08:23.137 01:45:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:08:23.137 01:45:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:08:23.137 01:45:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.137 01:45:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.137 01:45:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.137 01:45:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:08:23.137 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid4029700", 00:08:23.137 "tpoint_group_mask": "0x8", 00:08:23.137 "iscsi_conn": { 00:08:23.137 "mask": "0x2", 00:08:23.137 "tpoint_mask": "0x0" 00:08:23.137 }, 00:08:23.137 "scsi": { 00:08:23.137 "mask": "0x4", 00:08:23.137 "tpoint_mask": "0x0" 00:08:23.137 }, 00:08:23.137 "bdev": { 00:08:23.137 "mask": "0x8", 00:08:23.137 "tpoint_mask": "0xffffffffffffffff" 00:08:23.137 }, 00:08:23.137 "nvmf_rdma": { 00:08:23.137 "mask": "0x10", 00:08:23.137 "tpoint_mask": "0x0" 00:08:23.137 }, 00:08:23.137 "nvmf_tcp": { 00:08:23.137 "mask": "0x20", 
00:08:23.137 "tpoint_mask": "0x0" 00:08:23.137 }, 00:08:23.137 "ftl": { 00:08:23.137 "mask": "0x40", 00:08:23.137 "tpoint_mask": "0x0" 00:08:23.137 }, 00:08:23.137 "blobfs": { 00:08:23.137 "mask": "0x80", 00:08:23.137 "tpoint_mask": "0x0" 00:08:23.137 }, 00:08:23.137 "dsa": { 00:08:23.137 "mask": "0x200", 00:08:23.137 "tpoint_mask": "0x0" 00:08:23.137 }, 00:08:23.137 "thread": { 00:08:23.137 "mask": "0x400", 00:08:23.137 "tpoint_mask": "0x0" 00:08:23.137 }, 00:08:23.137 "nvme_pcie": { 00:08:23.137 "mask": "0x800", 00:08:23.137 "tpoint_mask": "0x0" 00:08:23.137 }, 00:08:23.137 "iaa": { 00:08:23.137 "mask": "0x1000", 00:08:23.137 "tpoint_mask": "0x0" 00:08:23.137 }, 00:08:23.137 "nvme_tcp": { 00:08:23.137 "mask": "0x2000", 00:08:23.137 "tpoint_mask": "0x0" 00:08:23.137 }, 00:08:23.137 "bdev_nvme": { 00:08:23.137 "mask": "0x4000", 00:08:23.137 "tpoint_mask": "0x0" 00:08:23.137 }, 00:08:23.137 "sock": { 00:08:23.137 "mask": "0x8000", 00:08:23.137 "tpoint_mask": "0x0" 00:08:23.137 }, 00:08:23.137 "blob": { 00:08:23.137 "mask": "0x10000", 00:08:23.137 "tpoint_mask": "0x0" 00:08:23.137 }, 00:08:23.137 "bdev_raid": { 00:08:23.137 "mask": "0x20000", 00:08:23.137 "tpoint_mask": "0x0" 00:08:23.137 }, 00:08:23.137 "scheduler": { 00:08:23.137 "mask": "0x40000", 00:08:23.137 "tpoint_mask": "0x0" 00:08:23.137 } 00:08:23.137 }' 00:08:23.137 01:45:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:08:23.137 01:45:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:08:23.137 01:45:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:08:23.137 01:45:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:08:23.137 01:45:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:08:23.396 01:45:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:08:23.396 01:45:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:08:23.396 01:45:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:08:23.396 01:45:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:08:23.396 01:45:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:08:23.396 00:08:23.396 real 0m0.215s 00:08:23.396 user 0m0.177s 00:08:23.396 sys 0m0.027s 00:08:23.396 01:45:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:23.396 01:45:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.396 ************************************ 00:08:23.396 END TEST rpc_trace_cmd_test 00:08:23.396 ************************************ 00:08:23.396 01:45:52 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:08:23.396 01:45:52 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:08:23.396 01:45:52 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:08:23.396 01:45:52 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:23.396 01:45:52 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:23.396 01:45:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.396 ************************************ 00:08:23.396 START TEST rpc_daemon_integrity 00:08:23.396 ************************************ 00:08:23.396 01:45:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:08:23.396 01:45:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:08:23.396 01:45:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.396 01:45:52 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:23.396 01:45:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.396 01:45:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:08:23.396 01:45:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:08:23.396 01:45:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:08:23.396 01:45:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:08:23.396 01:45:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.396 01:45:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:23.396 01:45:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.396 01:45:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:08:23.396 01:45:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:08:23.396 01:45:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.396 01:45:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:23.396 01:45:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.396 01:45:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:08:23.396 { 00:08:23.396 "name": "Malloc2", 00:08:23.396 "aliases": [ 00:08:23.396 "5ef34759-51e7-4bad-90b7-728c3f6249e7" 00:08:23.396 ], 00:08:23.396 "product_name": "Malloc disk", 00:08:23.396 "block_size": 512, 00:08:23.396 "num_blocks": 16384, 00:08:23.396 "uuid": "5ef34759-51e7-4bad-90b7-728c3f6249e7", 00:08:23.396 "assigned_rate_limits": { 00:08:23.396 "rw_ios_per_sec": 0, 00:08:23.396 "rw_mbytes_per_sec": 0, 00:08:23.396 "r_mbytes_per_sec": 0, 00:08:23.396 "w_mbytes_per_sec": 0 00:08:23.396 }, 00:08:23.396 "claimed": false, 00:08:23.396 "zoned": false, 00:08:23.396 "supported_io_types": { 00:08:23.396 "read": true, 00:08:23.396 "write": true, 00:08:23.396 "unmap": true, 00:08:23.396 "flush": true, 00:08:23.396 "reset": true, 00:08:23.396 "nvme_admin": false, 00:08:23.396 "nvme_io": false, 00:08:23.396 "nvme_io_md": false, 00:08:23.396 "write_zeroes": true, 00:08:23.396 "zcopy": true, 00:08:23.396 "get_zone_info": false, 00:08:23.396 "zone_management": false, 00:08:23.396 "zone_append": false, 00:08:23.396 "compare": false, 00:08:23.396 "compare_and_write": false, 00:08:23.396 "abort": true, 00:08:23.396 "seek_hole": false, 00:08:23.396 "seek_data": false, 00:08:23.397 "copy": true, 00:08:23.397 "nvme_iov_md": false 00:08:23.397 }, 00:08:23.397 "memory_domains": [ 00:08:23.397 { 00:08:23.397 "dma_device_id": "system", 00:08:23.397 "dma_device_type": 1 00:08:23.397 }, 00:08:23.397 { 00:08:23.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.397 "dma_device_type": 2 00:08:23.397 } 00:08:23.397 ], 00:08:23.397 "driver_specific": {} 00:08:23.397 } 00:08:23.397 ]' 00:08:23.397 01:45:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:08:23.656 01:45:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:08:23.656 01:45:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:08:23.656 01:45:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.656 01:45:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:23.656 [2024-10-09 01:45:53.092083] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:08:23.656 
[2024-10-09 01:45:53.092120] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:23.656 [2024-10-09 01:45:53.092139] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5425330 00:08:23.656 [2024-10-09 01:45:53.092148] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:23.656 [2024-10-09 01:45:53.093102] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:23.656 [2024-10-09 01:45:53.093129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:08:23.656 Passthru0 00:08:23.656 01:45:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.656 01:45:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:08:23.656 01:45:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.656 01:45:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:23.656 01:45:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.656 01:45:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:08:23.656 { 00:08:23.656 "name": "Malloc2", 00:08:23.656 "aliases": [ 00:08:23.656 "5ef34759-51e7-4bad-90b7-728c3f6249e7" 00:08:23.656 ], 00:08:23.656 "product_name": "Malloc disk", 00:08:23.656 "block_size": 512, 00:08:23.656 "num_blocks": 16384, 00:08:23.656 "uuid": "5ef34759-51e7-4bad-90b7-728c3f6249e7", 00:08:23.656 "assigned_rate_limits": { 00:08:23.656 "rw_ios_per_sec": 0, 00:08:23.656 "rw_mbytes_per_sec": 0, 00:08:23.656 "r_mbytes_per_sec": 0, 00:08:23.656 "w_mbytes_per_sec": 0 00:08:23.656 }, 00:08:23.656 "claimed": true, 00:08:23.656 "claim_type": "exclusive_write", 00:08:23.656 "zoned": false, 00:08:23.656 "supported_io_types": { 00:08:23.656 "read": true, 00:08:23.656 "write": true, 00:08:23.656 "unmap": true, 00:08:23.656 "flush": true, 00:08:23.656 "reset": true, 00:08:23.656 "nvme_admin": false, 00:08:23.656 "nvme_io": false, 00:08:23.656 "nvme_io_md": false, 00:08:23.656 "write_zeroes": true, 00:08:23.656 "zcopy": true, 00:08:23.656 "get_zone_info": false, 00:08:23.656 "zone_management": false, 00:08:23.656 "zone_append": false, 00:08:23.656 "compare": false, 00:08:23.656 "compare_and_write": false, 00:08:23.656 "abort": true, 00:08:23.656 "seek_hole": false, 00:08:23.656 "seek_data": false, 00:08:23.656 "copy": true, 00:08:23.656 "nvme_iov_md": false 00:08:23.656 }, 00:08:23.656 "memory_domains": [ 00:08:23.656 { 00:08:23.656 "dma_device_id": "system", 00:08:23.656 "dma_device_type": 1 00:08:23.656 }, 00:08:23.656 { 00:08:23.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.656 "dma_device_type": 2 00:08:23.656 } 00:08:23.656 ], 00:08:23.656 "driver_specific": {} 00:08:23.656 }, 00:08:23.656 { 00:08:23.656 "name": "Passthru0", 00:08:23.656 "aliases": [ 00:08:23.656 "ad05b251-a964-5bc9-b1d9-e64e00f260fe" 00:08:23.656 ], 00:08:23.656 "product_name": "passthru", 00:08:23.656 "block_size": 512, 00:08:23.656 "num_blocks": 16384, 00:08:23.657 "uuid": "ad05b251-a964-5bc9-b1d9-e64e00f260fe", 00:08:23.657 "assigned_rate_limits": { 00:08:23.657 "rw_ios_per_sec": 0, 00:08:23.657 "rw_mbytes_per_sec": 0, 00:08:23.657 "r_mbytes_per_sec": 0, 00:08:23.657 "w_mbytes_per_sec": 0 00:08:23.657 }, 00:08:23.657 "claimed": false, 00:08:23.657 "zoned": false, 00:08:23.657 "supported_io_types": { 00:08:23.657 "read": true, 00:08:23.657 "write": true, 00:08:23.657 "unmap": true, 00:08:23.657 "flush": true, 00:08:23.657 "reset": true, 
00:08:23.657 "nvme_admin": false, 00:08:23.657 "nvme_io": false, 00:08:23.657 "nvme_io_md": false, 00:08:23.657 "write_zeroes": true, 00:08:23.657 "zcopy": true, 00:08:23.657 "get_zone_info": false, 00:08:23.657 "zone_management": false, 00:08:23.657 "zone_append": false, 00:08:23.657 "compare": false, 00:08:23.657 "compare_and_write": false, 00:08:23.657 "abort": true, 00:08:23.657 "seek_hole": false, 00:08:23.657 "seek_data": false, 00:08:23.657 "copy": true, 00:08:23.657 "nvme_iov_md": false 00:08:23.657 }, 00:08:23.657 "memory_domains": [ 00:08:23.657 { 00:08:23.657 "dma_device_id": "system", 00:08:23.657 "dma_device_type": 1 00:08:23.657 }, 00:08:23.657 { 00:08:23.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.657 "dma_device_type": 2 00:08:23.657 } 00:08:23.657 ], 00:08:23.657 "driver_specific": { 00:08:23.657 "passthru": { 00:08:23.657 "name": "Passthru0", 00:08:23.657 "base_bdev_name": "Malloc2" 00:08:23.657 } 00:08:23.657 } 00:08:23.657 } 00:08:23.657 ]' 00:08:23.657 01:45:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:08:23.657 01:45:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:08:23.657 01:45:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:08:23.657 01:45:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.657 01:45:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:23.657 01:45:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.657 01:45:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:08:23.657 01:45:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.657 01:45:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:23.657 01:45:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.657 01:45:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:08:23.657 01:45:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.657 01:45:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:23.657 01:45:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.657 01:45:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:08:23.657 01:45:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:08:23.657 01:45:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:08:23.657 00:08:23.657 real 0m0.276s 00:08:23.657 user 0m0.174s 00:08:23.657 sys 0m0.051s 00:08:23.657 01:45:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:23.657 01:45:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:08:23.657 ************************************ 00:08:23.657 END TEST rpc_daemon_integrity 00:08:23.657 ************************************ 00:08:23.657 01:45:53 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:08:23.657 01:45:53 rpc -- rpc/rpc.sh@84 -- # killprocess 4029700 00:08:23.657 01:45:53 rpc -- common/autotest_common.sh@950 -- # '[' -z 4029700 ']' 00:08:23.657 01:45:53 rpc -- common/autotest_common.sh@954 -- # kill -0 4029700 00:08:23.657 01:45:53 rpc -- common/autotest_common.sh@955 -- # uname 00:08:23.657 01:45:53 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:23.657 01:45:53 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4029700 
00:08:23.916 01:45:53 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:23.916 01:45:53 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:23.916 01:45:53 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4029700' 00:08:23.916 killing process with pid 4029700 00:08:23.916 01:45:53 rpc -- common/autotest_common.sh@969 -- # kill 4029700 00:08:23.916 01:45:53 rpc -- common/autotest_common.sh@974 -- # wait 4029700 00:08:24.176 00:08:24.176 real 0m2.204s 00:08:24.176 user 0m2.734s 00:08:24.176 sys 0m0.829s 00:08:24.176 01:45:53 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:24.176 01:45:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.176 ************************************ 00:08:24.176 END TEST rpc 00:08:24.176 ************************************ 00:08:24.176 01:45:53 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:08:24.176 01:45:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:24.176 01:45:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:24.176 01:45:53 -- common/autotest_common.sh@10 -- # set +x 00:08:24.176 ************************************ 00:08:24.176 START TEST skip_rpc 00:08:24.176 ************************************ 00:08:24.176 01:45:53 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:08:24.435 * Looking for test storage... 00:08:24.435 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc 00:08:24.435 01:45:53 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:24.435 01:45:53 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:08:24.435 01:45:53 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:24.435 01:45:53 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:24.435 01:45:53 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:24.435 01:45:53 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:24.435 01:45:53 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:24.435 01:45:53 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:24.435 01:45:53 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:24.435 01:45:53 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:24.435 01:45:53 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:24.435 01:45:53 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:24.435 01:45:53 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:24.435 01:45:53 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:24.435 01:45:53 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:24.435 01:45:53 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:24.435 01:45:53 skip_rpc -- scripts/common.sh@345 -- # : 1 00:08:24.435 01:45:53 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:24.435 01:45:53 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:24.435 01:45:53 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:24.435 01:45:53 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:08:24.435 01:45:53 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:24.435 01:45:53 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:08:24.435 01:45:53 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:24.435 01:45:53 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:24.435 01:45:53 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:08:24.435 01:45:53 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:24.436 01:45:53 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:08:24.436 01:45:53 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:24.436 01:45:53 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:24.436 01:45:53 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:24.436 01:45:53 skip_rpc -- scripts/common.sh@368 -- # return 0 00:08:24.436 01:45:53 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:24.436 01:45:53 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:24.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.436 --rc genhtml_branch_coverage=1 00:08:24.436 --rc genhtml_function_coverage=1 00:08:24.436 --rc genhtml_legend=1 00:08:24.436 --rc geninfo_all_blocks=1 00:08:24.436 --rc geninfo_unexecuted_blocks=1 00:08:24.436 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:24.436 ' 00:08:24.436 01:45:53 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:24.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.436 --rc genhtml_branch_coverage=1 00:08:24.436 --rc genhtml_function_coverage=1 00:08:24.436 --rc genhtml_legend=1 00:08:24.436 --rc geninfo_all_blocks=1 00:08:24.436 --rc geninfo_unexecuted_blocks=1 00:08:24.436 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:24.436 ' 00:08:24.436 01:45:53 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:24.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.436 --rc genhtml_branch_coverage=1 00:08:24.436 --rc genhtml_function_coverage=1 00:08:24.436 --rc genhtml_legend=1 00:08:24.436 --rc geninfo_all_blocks=1 00:08:24.436 --rc geninfo_unexecuted_blocks=1 00:08:24.436 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:24.436 ' 00:08:24.436 01:45:53 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:24.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.436 --rc genhtml_branch_coverage=1 00:08:24.436 --rc genhtml_function_coverage=1 00:08:24.436 --rc genhtml_legend=1 00:08:24.436 --rc geninfo_all_blocks=1 00:08:24.436 --rc geninfo_unexecuted_blocks=1 00:08:24.436 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:24.436 ' 00:08:24.436 01:45:53 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:08:24.436 01:45:53 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:08:24.436 01:45:53 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:08:24.436 01:45:53 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:24.436 01:45:53 
skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:24.436 01:45:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.436 ************************************ 00:08:24.436 START TEST skip_rpc 00:08:24.436 ************************************ 00:08:24.436 01:45:53 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:08:24.436 01:45:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=4030074 00:08:24.436 01:45:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:24.436 01:45:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:08:24.436 01:45:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:08:24.436 [2024-10-09 01:45:54.005568] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 00:08:24.436 [2024-10-09 01:45:54.005646] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4030074 ] 00:08:24.436 [2024-10-09 01:45:54.081655] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.695 [2024-10-09 01:45:54.129027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.964 01:45:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:08:29.964 01:45:58 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:08:29.964 01:45:58 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:08:29.964 01:45:58 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:29.964 01:45:58 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:29.964 01:45:58 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:29.964 01:45:58 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:29.964 01:45:58 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:08:29.964 01:45:58 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:29.964 01:45:58 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.964 01:45:58 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:29.964 01:45:58 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:08:29.964 01:45:58 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:29.964 01:45:58 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:29.964 01:45:58 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:29.964 01:45:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:08:29.964 01:45:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 4030074 00:08:29.964 01:45:58 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 4030074 ']' 00:08:29.964 01:45:58 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 4030074 00:08:29.964 01:45:58 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:08:29.964 01:45:59 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:29.964 01:45:59 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4030074 
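Editorial note, not part of the captured log: the skip_rpc case traced above starts the target with --no-rpc-server, so no RPC listener is created and the NOT wrapper asserts that spdk_get_version fails. A minimal reproduction, assuming the build path and rpc.py as the client, would be:
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &    # target runs without an RPC listener
  tgt_pid=$!
  sleep 1
  if ./scripts/rpc.py spdk_get_version; then       # must fail: nothing serves /var/tmp/spdk.sock
      echo "unexpected success" >&2
  fi
  kill "$tgt_pid"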
00:08:29.964 01:45:59 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:29.964 01:45:59 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:29.964 01:45:59 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4030074' 00:08:29.964 killing process with pid 4030074 00:08:29.964 01:45:59 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 4030074 00:08:29.964 01:45:59 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 4030074 00:08:29.964 00:08:29.964 real 0m5.409s 00:08:29.964 user 0m5.129s 00:08:29.964 sys 0m0.324s 00:08:29.964 01:45:59 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:29.964 01:45:59 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.964 ************************************ 00:08:29.964 END TEST skip_rpc 00:08:29.964 ************************************ 00:08:29.964 01:45:59 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:08:29.964 01:45:59 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:29.964 01:45:59 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:29.964 01:45:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:29.964 ************************************ 00:08:29.964 START TEST skip_rpc_with_json 00:08:29.964 ************************************ 00:08:29.964 01:45:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:08:29.964 01:45:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:08:29.964 01:45:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=4030849 00:08:29.964 01:45:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:29.965 01:45:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:29.965 01:45:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 4030849 00:08:29.965 01:45:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 4030849 ']' 00:08:29.965 01:45:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.965 01:45:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:29.965 01:45:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.965 01:45:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:29.965 01:45:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:29.965 [2024-10-09 01:45:59.495932] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 
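Editorial sketch, not part of the captured log: the save-and-replay cycle that skip_rpc_with_json exercises below, reduced to the underlying commands. The paths, the use of rpc.py instead of the test's rpc_cmd wrapper, and redirecting the second target's output to log.txt are illustrative assumptions; the TCP transport, the save_config dump, and the --json replay are taken from the trace that follows.
  ./build/bin/spdk_tgt -m 0x1 &                        # first target, RPC enabled
  tgt=$!
  sleep 1
  ./scripts/rpc.py nvmf_create_transport -t tcp        # make a change worth persisting
  ./scripts/rpc.py save_config > config.json           # dump the live configuration as JSON
  kill "$tgt"; wait "$tgt"
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 &
  sleep 1
  grep -q 'TCP Transport Init' log.txt                 # transport recreated purely from the JSON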
00:08:29.965 [2024-10-09 01:45:59.496009] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4030849 ] 00:08:29.965 [2024-10-09 01:45:59.570074] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.965 [2024-10-09 01:45:59.619071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.224 01:45:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:30.224 01:45:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:08:30.224 01:45:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:08:30.224 01:45:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.224 01:45:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:30.225 [2024-10-09 01:45:59.842585] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:08:30.225 request: 00:08:30.225 { 00:08:30.225 "trtype": "tcp", 00:08:30.225 "method": "nvmf_get_transports", 00:08:30.225 "req_id": 1 00:08:30.225 } 00:08:30.225 Got JSON-RPC error response 00:08:30.225 response: 00:08:30.225 { 00:08:30.225 "code": -19, 00:08:30.225 "message": "No such device" 00:08:30.225 } 00:08:30.225 01:45:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:30.225 01:45:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:08:30.225 01:45:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.225 01:45:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:30.225 [2024-10-09 01:45:59.854683] tcp.c: 754:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:30.225 01:45:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.225 01:45:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:08:30.225 01:45:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:30.225 01:45:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:30.484 01:46:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:30.484 01:46:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:08:30.484 { 00:08:30.484 "subsystems": [ 00:08:30.484 { 00:08:30.484 "subsystem": "scheduler", 00:08:30.484 "config": [ 00:08:30.484 { 00:08:30.484 "method": "framework_set_scheduler", 00:08:30.484 "params": { 00:08:30.484 "name": "static" 00:08:30.484 } 00:08:30.484 } 00:08:30.484 ] 00:08:30.484 }, 00:08:30.484 { 00:08:30.484 "subsystem": "vmd", 00:08:30.484 "config": [] 00:08:30.484 }, 00:08:30.484 { 00:08:30.484 "subsystem": "sock", 00:08:30.484 "config": [ 00:08:30.484 { 00:08:30.484 "method": "sock_set_default_impl", 00:08:30.484 "params": { 00:08:30.484 "impl_name": "posix" 00:08:30.484 } 00:08:30.484 }, 00:08:30.484 { 00:08:30.484 "method": "sock_impl_set_options", 00:08:30.484 "params": { 00:08:30.484 "impl_name": "ssl", 00:08:30.484 "recv_buf_size": 4096, 00:08:30.484 "send_buf_size": 4096, 00:08:30.484 "enable_recv_pipe": true, 00:08:30.484 "enable_quickack": false, 00:08:30.484 
"enable_placement_id": 0, 00:08:30.484 "enable_zerocopy_send_server": true, 00:08:30.484 "enable_zerocopy_send_client": false, 00:08:30.484 "zerocopy_threshold": 0, 00:08:30.484 "tls_version": 0, 00:08:30.484 "enable_ktls": false 00:08:30.484 } 00:08:30.484 }, 00:08:30.484 { 00:08:30.484 "method": "sock_impl_set_options", 00:08:30.484 "params": { 00:08:30.484 "impl_name": "posix", 00:08:30.484 "recv_buf_size": 2097152, 00:08:30.484 "send_buf_size": 2097152, 00:08:30.484 "enable_recv_pipe": true, 00:08:30.484 "enable_quickack": false, 00:08:30.484 "enable_placement_id": 0, 00:08:30.484 "enable_zerocopy_send_server": true, 00:08:30.484 "enable_zerocopy_send_client": false, 00:08:30.484 "zerocopy_threshold": 0, 00:08:30.484 "tls_version": 0, 00:08:30.484 "enable_ktls": false 00:08:30.484 } 00:08:30.484 } 00:08:30.484 ] 00:08:30.484 }, 00:08:30.484 { 00:08:30.484 "subsystem": "iobuf", 00:08:30.484 "config": [ 00:08:30.484 { 00:08:30.484 "method": "iobuf_set_options", 00:08:30.484 "params": { 00:08:30.484 "small_pool_count": 8192, 00:08:30.484 "large_pool_count": 1024, 00:08:30.484 "small_bufsize": 8192, 00:08:30.484 "large_bufsize": 135168 00:08:30.484 } 00:08:30.484 } 00:08:30.484 ] 00:08:30.484 }, 00:08:30.484 { 00:08:30.484 "subsystem": "keyring", 00:08:30.484 "config": [] 00:08:30.484 }, 00:08:30.484 { 00:08:30.484 "subsystem": "vfio_user_target", 00:08:30.484 "config": null 00:08:30.484 }, 00:08:30.484 { 00:08:30.484 "subsystem": "fsdev", 00:08:30.484 "config": [ 00:08:30.484 { 00:08:30.484 "method": "fsdev_set_opts", 00:08:30.484 "params": { 00:08:30.484 "fsdev_io_pool_size": 65535, 00:08:30.484 "fsdev_io_cache_size": 256 00:08:30.484 } 00:08:30.484 } 00:08:30.484 ] 00:08:30.484 }, 00:08:30.484 { 00:08:30.484 "subsystem": "accel", 00:08:30.484 "config": [ 00:08:30.484 { 00:08:30.484 "method": "accel_set_options", 00:08:30.484 "params": { 00:08:30.484 "small_cache_size": 128, 00:08:30.484 "large_cache_size": 16, 00:08:30.484 "task_count": 2048, 00:08:30.484 "sequence_count": 2048, 00:08:30.484 "buf_count": 2048 00:08:30.484 } 00:08:30.484 } 00:08:30.484 ] 00:08:30.484 }, 00:08:30.484 { 00:08:30.484 "subsystem": "bdev", 00:08:30.484 "config": [ 00:08:30.484 { 00:08:30.484 "method": "bdev_set_options", 00:08:30.484 "params": { 00:08:30.484 "bdev_io_pool_size": 65535, 00:08:30.484 "bdev_io_cache_size": 256, 00:08:30.484 "bdev_auto_examine": true, 00:08:30.484 "iobuf_small_cache_size": 128, 00:08:30.484 "iobuf_large_cache_size": 16 00:08:30.484 } 00:08:30.484 }, 00:08:30.484 { 00:08:30.484 "method": "bdev_raid_set_options", 00:08:30.484 "params": { 00:08:30.484 "process_window_size_kb": 1024, 00:08:30.484 "process_max_bandwidth_mb_sec": 0 00:08:30.484 } 00:08:30.484 }, 00:08:30.484 { 00:08:30.484 "method": "bdev_nvme_set_options", 00:08:30.484 "params": { 00:08:30.485 "action_on_timeout": "none", 00:08:30.485 "timeout_us": 0, 00:08:30.485 "timeout_admin_us": 0, 00:08:30.485 "keep_alive_timeout_ms": 10000, 00:08:30.485 "arbitration_burst": 0, 00:08:30.485 "low_priority_weight": 0, 00:08:30.485 "medium_priority_weight": 0, 00:08:30.485 "high_priority_weight": 0, 00:08:30.485 "nvme_adminq_poll_period_us": 10000, 00:08:30.485 "nvme_ioq_poll_period_us": 0, 00:08:30.485 "io_queue_requests": 0, 00:08:30.485 "delay_cmd_submit": true, 00:08:30.485 "transport_retry_count": 4, 00:08:30.485 "bdev_retry_count": 3, 00:08:30.485 "transport_ack_timeout": 0, 00:08:30.485 "ctrlr_loss_timeout_sec": 0, 00:08:30.485 "reconnect_delay_sec": 0, 00:08:30.485 "fast_io_fail_timeout_sec": 0, 00:08:30.485 
"disable_auto_failback": false, 00:08:30.485 "generate_uuids": false, 00:08:30.485 "transport_tos": 0, 00:08:30.485 "nvme_error_stat": false, 00:08:30.485 "rdma_srq_size": 0, 00:08:30.485 "io_path_stat": false, 00:08:30.485 "allow_accel_sequence": false, 00:08:30.485 "rdma_max_cq_size": 0, 00:08:30.485 "rdma_cm_event_timeout_ms": 0, 00:08:30.485 "dhchap_digests": [ 00:08:30.485 "sha256", 00:08:30.485 "sha384", 00:08:30.485 "sha512" 00:08:30.485 ], 00:08:30.485 "dhchap_dhgroups": [ 00:08:30.485 "null", 00:08:30.485 "ffdhe2048", 00:08:30.485 "ffdhe3072", 00:08:30.485 "ffdhe4096", 00:08:30.485 "ffdhe6144", 00:08:30.485 "ffdhe8192" 00:08:30.485 ] 00:08:30.485 } 00:08:30.485 }, 00:08:30.485 { 00:08:30.485 "method": "bdev_nvme_set_hotplug", 00:08:30.485 "params": { 00:08:30.485 "period_us": 100000, 00:08:30.485 "enable": false 00:08:30.485 } 00:08:30.485 }, 00:08:30.485 { 00:08:30.485 "method": "bdev_iscsi_set_options", 00:08:30.485 "params": { 00:08:30.485 "timeout_sec": 30 00:08:30.485 } 00:08:30.485 }, 00:08:30.485 { 00:08:30.485 "method": "bdev_wait_for_examine" 00:08:30.485 } 00:08:30.485 ] 00:08:30.485 }, 00:08:30.485 { 00:08:30.485 "subsystem": "nvmf", 00:08:30.485 "config": [ 00:08:30.485 { 00:08:30.485 "method": "nvmf_set_config", 00:08:30.485 "params": { 00:08:30.485 "discovery_filter": "match_any", 00:08:30.485 "admin_cmd_passthru": { 00:08:30.485 "identify_ctrlr": false 00:08:30.485 }, 00:08:30.485 "dhchap_digests": [ 00:08:30.485 "sha256", 00:08:30.485 "sha384", 00:08:30.485 "sha512" 00:08:30.485 ], 00:08:30.485 "dhchap_dhgroups": [ 00:08:30.485 "null", 00:08:30.485 "ffdhe2048", 00:08:30.485 "ffdhe3072", 00:08:30.485 "ffdhe4096", 00:08:30.485 "ffdhe6144", 00:08:30.485 "ffdhe8192" 00:08:30.485 ] 00:08:30.485 } 00:08:30.485 }, 00:08:30.485 { 00:08:30.485 "method": "nvmf_set_max_subsystems", 00:08:30.485 "params": { 00:08:30.485 "max_subsystems": 1024 00:08:30.485 } 00:08:30.485 }, 00:08:30.485 { 00:08:30.485 "method": "nvmf_set_crdt", 00:08:30.485 "params": { 00:08:30.485 "crdt1": 0, 00:08:30.485 "crdt2": 0, 00:08:30.485 "crdt3": 0 00:08:30.485 } 00:08:30.485 }, 00:08:30.485 { 00:08:30.485 "method": "nvmf_create_transport", 00:08:30.485 "params": { 00:08:30.485 "trtype": "TCP", 00:08:30.485 "max_queue_depth": 128, 00:08:30.485 "max_io_qpairs_per_ctrlr": 127, 00:08:30.485 "in_capsule_data_size": 4096, 00:08:30.485 "max_io_size": 131072, 00:08:30.485 "io_unit_size": 131072, 00:08:30.485 "max_aq_depth": 128, 00:08:30.485 "num_shared_buffers": 511, 00:08:30.485 "buf_cache_size": 4294967295, 00:08:30.485 "dif_insert_or_strip": false, 00:08:30.485 "zcopy": false, 00:08:30.485 "c2h_success": true, 00:08:30.485 "sock_priority": 0, 00:08:30.485 "abort_timeout_sec": 1, 00:08:30.485 "ack_timeout": 0, 00:08:30.485 "data_wr_pool_size": 0 00:08:30.485 } 00:08:30.485 } 00:08:30.485 ] 00:08:30.485 }, 00:08:30.485 { 00:08:30.485 "subsystem": "nbd", 00:08:30.485 "config": [] 00:08:30.485 }, 00:08:30.485 { 00:08:30.485 "subsystem": "ublk", 00:08:30.485 "config": [] 00:08:30.485 }, 00:08:30.485 { 00:08:30.485 "subsystem": "vhost_blk", 00:08:30.485 "config": [] 00:08:30.485 }, 00:08:30.485 { 00:08:30.485 "subsystem": "scsi", 00:08:30.485 "config": null 00:08:30.485 }, 00:08:30.485 { 00:08:30.485 "subsystem": "iscsi", 00:08:30.485 "config": [ 00:08:30.485 { 00:08:30.485 "method": "iscsi_set_options", 00:08:30.485 "params": { 00:08:30.485 "node_base": "iqn.2016-06.io.spdk", 00:08:30.485 "max_sessions": 128, 00:08:30.485 "max_connections_per_session": 2, 00:08:30.485 "max_queue_depth": 64, 00:08:30.485 
"default_time2wait": 2, 00:08:30.485 "default_time2retain": 20, 00:08:30.485 "first_burst_length": 8192, 00:08:30.485 "immediate_data": true, 00:08:30.485 "allow_duplicated_isid": false, 00:08:30.485 "error_recovery_level": 0, 00:08:30.485 "nop_timeout": 60, 00:08:30.485 "nop_in_interval": 30, 00:08:30.485 "disable_chap": false, 00:08:30.485 "require_chap": false, 00:08:30.485 "mutual_chap": false, 00:08:30.485 "chap_group": 0, 00:08:30.485 "max_large_datain_per_connection": 64, 00:08:30.485 "max_r2t_per_connection": 4, 00:08:30.485 "pdu_pool_size": 36864, 00:08:30.485 "immediate_data_pool_size": 16384, 00:08:30.485 "data_out_pool_size": 2048 00:08:30.485 } 00:08:30.485 } 00:08:30.485 ] 00:08:30.485 }, 00:08:30.485 { 00:08:30.485 "subsystem": "vhost_scsi", 00:08:30.485 "config": [] 00:08:30.485 } 00:08:30.485 ] 00:08:30.485 } 00:08:30.485 01:46:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:30.485 01:46:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 4030849 00:08:30.485 01:46:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 4030849 ']' 00:08:30.485 01:46:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 4030849 00:08:30.485 01:46:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:08:30.485 01:46:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:30.485 01:46:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4030849 00:08:30.485 01:46:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:30.485 01:46:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:30.485 01:46:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4030849' 00:08:30.485 killing process with pid 4030849 00:08:30.485 01:46:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 4030849 00:08:30.485 01:46:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 4030849 00:08:31.138 01:46:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=4030973 00:08:31.138 01:46:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:08:31.138 01:46:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:08:36.432 01:46:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 4030973 00:08:36.432 01:46:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 4030973 ']' 00:08:36.432 01:46:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 4030973 00:08:36.432 01:46:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:08:36.432 01:46:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:36.432 01:46:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4030973 00:08:36.432 01:46:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:36.432 01:46:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:36.432 01:46:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- 
# echo 'killing process with pid 4030973' 00:08:36.432 killing process with pid 4030973 00:08:36.432 01:46:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 4030973 00:08:36.432 01:46:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 4030973 00:08:36.432 01:46:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:08:36.432 01:46:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/log.txt 00:08:36.432 00:08:36.432 real 0m6.352s 00:08:36.432 user 0m5.998s 00:08:36.432 sys 0m0.684s 00:08:36.432 01:46:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:36.432 01:46:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:36.432 ************************************ 00:08:36.432 END TEST skip_rpc_with_json 00:08:36.432 ************************************ 00:08:36.432 01:46:05 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:08:36.432 01:46:05 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:36.432 01:46:05 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:36.432 01:46:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.432 ************************************ 00:08:36.432 START TEST skip_rpc_with_delay 00:08:36.432 ************************************ 00:08:36.432 01:46:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:08:36.432 01:46:05 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:36.432 01:46:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:08:36.432 01:46:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:36.432 01:46:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:08:36.432 01:46:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:36.432 01:46:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:08:36.432 01:46:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:36.432 01:46:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:08:36.432 01:46:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:36.432 01:46:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:08:36.432 01:46:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:08:36.432 01:46:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 
00:08:36.432 [2024-10-09 01:46:05.921877] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:08:36.432 [2024-10-09 01:46:05.921985] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:08:36.432 01:46:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:08:36.432 01:46:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:36.432 01:46:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:36.432 01:46:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:36.432 00:08:36.432 real 0m0.034s 00:08:36.432 user 0m0.014s 00:08:36.432 sys 0m0.020s 00:08:36.432 01:46:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:36.432 01:46:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:08:36.432 ************************************ 00:08:36.432 END TEST skip_rpc_with_delay 00:08:36.432 ************************************ 00:08:36.432 01:46:05 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:08:36.432 01:46:05 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:08:36.432 01:46:05 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:08:36.432 01:46:05 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:36.432 01:46:05 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:36.432 01:46:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:36.432 ************************************ 00:08:36.432 START TEST exit_on_failed_rpc_init 00:08:36.432 ************************************ 00:08:36.432 01:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:08:36.432 01:46:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=4031740 00:08:36.432 01:46:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:36.432 01:46:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 4031740 00:08:36.432 01:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 4031740 ']' 00:08:36.432 01:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.432 01:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:36.432 01:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.432 01:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:36.432 01:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:36.432 [2024-10-09 01:46:06.029010] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 
00:08:36.432 [2024-10-09 01:46:06.029066] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4031740 ] 00:08:36.691 [2024-10-09 01:46:06.104783] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.691 [2024-10-09 01:46:06.154621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.950 01:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:36.950 01:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:08:36.950 01:46:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:36.950 01:46:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:36.950 01:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:08:36.950 01:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:36.950 01:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:08:36.950 01:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:36.950 01:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:08:36.950 01:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:36.950 01:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:08:36.950 01:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:36.950 01:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:08:36.950 01:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:08:36.950 01:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:36.950 [2024-10-09 01:46:06.423514] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 00:08:36.950 [2024-10-09 01:46:06.423581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4031906 ] 00:08:36.950 [2024-10-09 01:46:06.496590] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.950 [2024-10-09 01:46:06.542905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.950 [2024-10-09 01:46:06.542997] rpc.c: 181:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:08:36.950 [2024-10-09 01:46:06.543011] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:08:36.950 [2024-10-09 01:46:06.543019] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:36.950 01:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:08:36.950 01:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:36.950 01:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:08:36.950 01:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:08:36.950 01:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:08:36.950 01:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:36.950 01:46:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:36.950 01:46:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 4031740 00:08:36.950 01:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 4031740 ']' 00:08:36.950 01:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 4031740 00:08:36.950 01:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:08:36.950 01:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:36.950 01:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4031740 00:08:37.210 01:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:37.210 01:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:37.210 01:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4031740' 00:08:37.210 killing process with pid 4031740 00:08:37.210 01:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 4031740 00:08:37.210 01:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 4031740 00:08:37.469 00:08:37.469 real 0m0.952s 00:08:37.469 user 0m0.960s 00:08:37.469 sys 0m0.426s 00:08:37.470 01:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:37.470 01:46:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:37.470 ************************************ 00:08:37.470 END TEST exit_on_failed_rpc_init 00:08:37.470 ************************************ 00:08:37.470 01:46:06 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc/config.json 00:08:37.470 00:08:37.470 real 0m13.253s 00:08:37.470 user 0m12.316s 00:08:37.470 sys 0m1.786s 00:08:37.470 01:46:07 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:37.470 01:46:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:37.470 ************************************ 00:08:37.470 END TEST skip_rpc 00:08:37.470 ************************************ 00:08:37.470 01:46:07 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:08:37.470 01:46:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:37.470 01:46:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:37.470 01:46:07 
-- common/autotest_common.sh@10 -- # set +x 00:08:37.470 ************************************ 00:08:37.470 START TEST rpc_client 00:08:37.470 ************************************ 00:08:37.470 01:46:07 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:08:37.729 * Looking for test storage... 00:08:37.729 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client 00:08:37.729 01:46:07 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:37.729 01:46:07 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:08:37.729 01:46:07 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:37.729 01:46:07 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:37.729 01:46:07 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:37.729 01:46:07 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:37.729 01:46:07 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:37.729 01:46:07 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.729 01:46:07 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:08:37.729 01:46:07 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:08:37.729 01:46:07 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:08:37.729 01:46:07 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:08:37.729 01:46:07 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:08:37.729 01:46:07 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:08:37.729 01:46:07 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:37.729 01:46:07 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:08:37.729 01:46:07 rpc_client -- scripts/common.sh@345 -- # : 1 00:08:37.729 01:46:07 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:37.729 01:46:07 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:37.729 01:46:07 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:08:37.729 01:46:07 rpc_client -- scripts/common.sh@353 -- # local d=1 00:08:37.729 01:46:07 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.729 01:46:07 rpc_client -- scripts/common.sh@355 -- # echo 1 00:08:37.729 01:46:07 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:08:37.729 01:46:07 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:08:37.729 01:46:07 rpc_client -- scripts/common.sh@353 -- # local d=2 00:08:37.729 01:46:07 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.729 01:46:07 rpc_client -- scripts/common.sh@355 -- # echo 2 00:08:37.729 01:46:07 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:08:37.729 01:46:07 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:37.729 01:46:07 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:37.729 01:46:07 rpc_client -- scripts/common.sh@368 -- # return 0 00:08:37.729 01:46:07 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:37.729 01:46:07 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:37.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.729 --rc genhtml_branch_coverage=1 00:08:37.729 --rc genhtml_function_coverage=1 00:08:37.729 --rc genhtml_legend=1 00:08:37.729 --rc geninfo_all_blocks=1 00:08:37.729 --rc geninfo_unexecuted_blocks=1 00:08:37.729 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:37.729 ' 00:08:37.729 01:46:07 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:37.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.729 --rc genhtml_branch_coverage=1 00:08:37.729 --rc genhtml_function_coverage=1 00:08:37.729 --rc genhtml_legend=1 00:08:37.729 --rc geninfo_all_blocks=1 00:08:37.729 --rc geninfo_unexecuted_blocks=1 00:08:37.729 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:37.729 ' 00:08:37.729 01:46:07 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:37.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.729 --rc genhtml_branch_coverage=1 00:08:37.729 --rc genhtml_function_coverage=1 00:08:37.729 --rc genhtml_legend=1 00:08:37.729 --rc geninfo_all_blocks=1 00:08:37.729 --rc geninfo_unexecuted_blocks=1 00:08:37.729 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:37.729 ' 00:08:37.729 01:46:07 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:37.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.729 --rc genhtml_branch_coverage=1 00:08:37.729 --rc genhtml_function_coverage=1 00:08:37.729 --rc genhtml_legend=1 00:08:37.729 --rc geninfo_all_blocks=1 00:08:37.729 --rc geninfo_unexecuted_blocks=1 00:08:37.729 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:37.729 ' 00:08:37.729 01:46:07 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:08:37.729 OK 00:08:37.729 01:46:07 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:37.729 00:08:37.729 real 0m0.211s 00:08:37.729 user 0m0.115s 00:08:37.729 sys 0m0.110s 00:08:37.729 01:46:07 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:08:37.729 01:46:07 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:08:37.729 ************************************ 00:08:37.729 END TEST rpc_client 00:08:37.730 ************************************ 00:08:37.730 01:46:07 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:08:37.730 01:46:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:37.730 01:46:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:37.730 01:46:07 -- common/autotest_common.sh@10 -- # set +x 00:08:37.730 ************************************ 00:08:37.730 START TEST json_config 00:08:37.730 ************************************ 00:08:37.730 01:46:07 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config.sh 00:08:37.989 01:46:07 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:37.989 01:46:07 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:08:37.989 01:46:07 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:37.989 01:46:07 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:37.989 01:46:07 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:37.989 01:46:07 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:37.989 01:46:07 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:37.989 01:46:07 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.989 01:46:07 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:08:37.989 01:46:07 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:08:37.989 01:46:07 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:08:37.989 01:46:07 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:08:37.989 01:46:07 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:08:37.989 01:46:07 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:08:37.989 01:46:07 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:37.989 01:46:07 json_config -- scripts/common.sh@344 -- # case "$op" in 00:08:37.989 01:46:07 json_config -- scripts/common.sh@345 -- # : 1 00:08:37.989 01:46:07 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:37.989 01:46:07 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:37.989 01:46:07 json_config -- scripts/common.sh@365 -- # decimal 1 00:08:37.989 01:46:07 json_config -- scripts/common.sh@353 -- # local d=1 00:08:37.989 01:46:07 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.989 01:46:07 json_config -- scripts/common.sh@355 -- # echo 1 00:08:37.989 01:46:07 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:08:37.989 01:46:07 json_config -- scripts/common.sh@366 -- # decimal 2 00:08:37.990 01:46:07 json_config -- scripts/common.sh@353 -- # local d=2 00:08:37.990 01:46:07 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.990 01:46:07 json_config -- scripts/common.sh@355 -- # echo 2 00:08:37.990 01:46:07 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:08:37.990 01:46:07 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:37.990 01:46:07 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:37.990 01:46:07 json_config -- scripts/common.sh@368 -- # return 0 00:08:37.990 01:46:07 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:37.990 01:46:07 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:37.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.990 --rc genhtml_branch_coverage=1 00:08:37.990 --rc genhtml_function_coverage=1 00:08:37.990 --rc genhtml_legend=1 00:08:37.990 --rc geninfo_all_blocks=1 00:08:37.990 --rc geninfo_unexecuted_blocks=1 00:08:37.990 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:37.990 ' 00:08:37.990 01:46:07 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:37.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.990 --rc genhtml_branch_coverage=1 00:08:37.990 --rc genhtml_function_coverage=1 00:08:37.990 --rc genhtml_legend=1 00:08:37.990 --rc geninfo_all_blocks=1 00:08:37.990 --rc geninfo_unexecuted_blocks=1 00:08:37.990 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:37.990 ' 00:08:37.990 01:46:07 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:37.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.990 --rc genhtml_branch_coverage=1 00:08:37.990 --rc genhtml_function_coverage=1 00:08:37.990 --rc genhtml_legend=1 00:08:37.990 --rc geninfo_all_blocks=1 00:08:37.990 --rc geninfo_unexecuted_blocks=1 00:08:37.990 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:37.990 ' 00:08:37.990 01:46:07 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:37.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.990 --rc genhtml_branch_coverage=1 00:08:37.990 --rc genhtml_function_coverage=1 00:08:37.990 --rc genhtml_legend=1 00:08:37.990 --rc geninfo_all_blocks=1 00:08:37.990 --rc geninfo_unexecuted_blocks=1 00:08:37.990 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:37.990 ' 00:08:37.990 01:46:07 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:08:37.990 01:46:07 json_config -- nvmf/common.sh@7 -- # uname -s 00:08:37.990 01:46:07 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:37.990 01:46:07 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:37.990 01:46:07 json_config -- nvmf/common.sh@10 
-- # NVMF_SECOND_PORT=4421 00:08:37.990 01:46:07 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:37.990 01:46:07 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:37.990 01:46:07 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:37.990 01:46:07 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:37.990 01:46:07 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:37.990 01:46:07 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:37.990 01:46:07 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:37.990 01:46:07 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8023d868-666a-e711-906e-0017a4403562 00:08:37.990 01:46:07 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8023d868-666a-e711-906e-0017a4403562 00:08:37.990 01:46:07 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:37.990 01:46:07 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:37.990 01:46:07 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:37.990 01:46:07 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:37.990 01:46:07 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:08:37.990 01:46:07 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:08:37.990 01:46:07 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:37.990 01:46:07 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:37.990 01:46:07 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:37.990 01:46:07 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.990 01:46:07 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.990 01:46:07 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.990 01:46:07 json_config -- paths/export.sh@5 -- # export PATH 00:08:37.990 01:46:07 json_config -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.990 01:46:07 json_config -- nvmf/common.sh@51 -- # : 0 00:08:37.990 01:46:07 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:37.990 01:46:07 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:37.990 01:46:07 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:37.990 01:46:07 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:37.990 01:46:07 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:37.990 01:46:07 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:37.990 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:37.990 01:46:07 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:37.990 01:46:07 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:37.990 01:46:07 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:37.990 01:46:07 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:08:37.990 01:46:07 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:08:37.990 01:46:07 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:08:37.990 01:46:07 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:08:37.990 01:46:07 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:37.990 01:46:07 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:08:37.990 WARNING: No tests are enabled so not running JSON configuration tests 00:08:37.990 01:46:07 json_config -- json_config/json_config.sh@28 -- # exit 0 00:08:37.990 00:08:37.990 real 0m0.182s 00:08:37.990 user 0m0.100s 00:08:37.990 sys 0m0.090s 00:08:37.990 01:46:07 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:37.990 01:46:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:37.990 ************************************ 00:08:37.990 END TEST json_config 00:08:37.990 ************************************ 00:08:37.990 01:46:07 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:08:37.990 01:46:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:37.990 01:46:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:37.990 01:46:07 -- common/autotest_common.sh@10 -- # set +x 00:08:37.990 ************************************ 00:08:37.990 START TEST json_config_extra_key 00:08:37.990 ************************************ 00:08:37.990 01:46:07 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:08:38.251 01:46:07 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:38.251 01:46:07 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov 
--version 00:08:38.251 01:46:07 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:38.251 01:46:07 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:38.251 01:46:07 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:38.251 01:46:07 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:38.251 01:46:07 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:38.251 01:46:07 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.251 01:46:07 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:08:38.251 01:46:07 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:08:38.251 01:46:07 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:08:38.251 01:46:07 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:08:38.251 01:46:07 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:08:38.251 01:46:07 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:08:38.251 01:46:07 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:38.251 01:46:07 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:08:38.251 01:46:07 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:08:38.251 01:46:07 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:38.251 01:46:07 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:38.251 01:46:07 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:08:38.251 01:46:07 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:08:38.251 01:46:07 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.251 01:46:07 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:08:38.251 01:46:07 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:08:38.251 01:46:07 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:08:38.251 01:46:07 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:08:38.251 01:46:07 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.251 01:46:07 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:08:38.251 01:46:07 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:08:38.251 01:46:07 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:38.251 01:46:07 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:38.251 01:46:07 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:08:38.251 01:46:07 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.251 01:46:07 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:38.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.251 --rc genhtml_branch_coverage=1 00:08:38.251 --rc genhtml_function_coverage=1 00:08:38.251 --rc genhtml_legend=1 00:08:38.251 --rc geninfo_all_blocks=1 00:08:38.251 --rc geninfo_unexecuted_blocks=1 00:08:38.251 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:38.251 ' 00:08:38.251 01:46:07 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:38.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.251 --rc genhtml_branch_coverage=1 
00:08:38.251 --rc genhtml_function_coverage=1 00:08:38.251 --rc genhtml_legend=1 00:08:38.251 --rc geninfo_all_blocks=1 00:08:38.251 --rc geninfo_unexecuted_blocks=1 00:08:38.251 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:38.251 ' 00:08:38.251 01:46:07 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:38.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.251 --rc genhtml_branch_coverage=1 00:08:38.251 --rc genhtml_function_coverage=1 00:08:38.251 --rc genhtml_legend=1 00:08:38.251 --rc geninfo_all_blocks=1 00:08:38.251 --rc geninfo_unexecuted_blocks=1 00:08:38.251 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:38.251 ' 00:08:38.251 01:46:07 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:38.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.251 --rc genhtml_branch_coverage=1 00:08:38.251 --rc genhtml_function_coverage=1 00:08:38.251 --rc genhtml_legend=1 00:08:38.251 --rc geninfo_all_blocks=1 00:08:38.251 --rc geninfo_unexecuted_blocks=1 00:08:38.251 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:38.251 ' 00:08:38.251 01:46:07 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh 00:08:38.251 01:46:07 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:08:38.251 01:46:07 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:38.251 01:46:07 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:38.251 01:46:07 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:38.251 01:46:07 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:38.251 01:46:07 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:38.251 01:46:07 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:38.251 01:46:07 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:38.251 01:46:07 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:38.251 01:46:07 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:38.251 01:46:07 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:38.251 01:46:07 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8023d868-666a-e711-906e-0017a4403562 00:08:38.251 01:46:07 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8023d868-666a-e711-906e-0017a4403562 00:08:38.251 01:46:07 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:38.251 01:46:07 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:38.251 01:46:07 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:38.251 01:46:07 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:38.251 01:46:07 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:08:38.251 01:46:07 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:08:38.251 01:46:07 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:38.251 01:46:07 json_config_extra_key -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.251 01:46:07 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.251 01:46:07 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.251 01:46:07 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.251 01:46:07 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.251 01:46:07 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:08:38.251 01:46:07 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.251 01:46:07 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:08:38.251 01:46:07 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:38.251 01:46:07 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:38.251 01:46:07 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:38.251 01:46:07 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:38.251 01:46:07 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:38.251 01:46:07 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:38.251 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:38.251 01:46:07 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:38.251 01:46:07 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:38.252 01:46:07 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:38.252 01:46:07 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/common.sh 00:08:38.252 01:46:07 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:08:38.252 01:46:07 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # 
declare -A app_pid 00:08:38.252 01:46:07 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:08:38.252 01:46:07 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:08:38.252 01:46:07 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:08:38.252 01:46:07 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:08:38.252 01:46:07 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json') 00:08:38.252 01:46:07 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:08:38.252 01:46:07 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:38.252 01:46:07 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:08:38.252 INFO: launching applications... 00:08:38.252 01:46:07 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:08:38.252 01:46:07 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:08:38.252 01:46:07 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:08:38.252 01:46:07 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:38.252 01:46:07 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:38.252 01:46:07 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:08:38.252 01:46:07 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:38.252 01:46:07 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:38.252 01:46:07 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=4032253 00:08:38.252 01:46:07 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:38.252 Waiting for target to run... 00:08:38.252 01:46:07 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 4032253 /var/tmp/spdk_tgt.sock 00:08:38.252 01:46:07 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 4032253 ']' 00:08:38.252 01:46:07 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:38.252 01:46:07 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/extra_key.json 00:08:38.252 01:46:07 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:38.252 01:46:07 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:38.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:08:38.252 01:46:07 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:38.252 01:46:07 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:38.252 [2024-10-09 01:46:07.834509] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 00:08:38.252 [2024-10-09 01:46:07.834587] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4032253 ] 00:08:38.829 [2024-10-09 01:46:08.329070] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.829 [2024-10-09 01:46:08.389084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.094 01:46:08 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:39.094 01:46:08 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:08:39.094 01:46:08 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:08:39.094 00:08:39.094 01:46:08 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:08:39.094 INFO: shutting down applications... 00:08:39.094 01:46:08 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:08:39.094 01:46:08 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:08:39.094 01:46:08 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:39.094 01:46:08 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 4032253 ]] 00:08:39.094 01:46:08 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 4032253 00:08:39.094 01:46:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:39.094 01:46:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:39.094 01:46:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 4032253 00:08:39.094 01:46:08 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:39.662 01:46:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:39.662 01:46:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:39.662 01:46:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 4032253 00:08:39.662 01:46:09 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:39.662 01:46:09 json_config_extra_key -- json_config/common.sh@43 -- # break 00:08:39.662 01:46:09 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:39.662 01:46:09 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:39.662 SPDK target shutdown done 00:08:39.662 01:46:09 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:08:39.662 Success 00:08:39.662 00:08:39.662 real 0m1.581s 00:08:39.662 user 0m1.176s 00:08:39.662 sys 0m0.619s 00:08:39.662 01:46:09 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:39.662 01:46:09 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:39.662 ************************************ 00:08:39.662 END TEST json_config_extra_key 00:08:39.662 ************************************ 00:08:39.662 01:46:09 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 
00:08:39.662 01:46:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:39.662 01:46:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:39.662 01:46:09 -- common/autotest_common.sh@10 -- # set +x 00:08:39.662 ************************************ 00:08:39.662 START TEST alias_rpc 00:08:39.662 ************************************ 00:08:39.662 01:46:09 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:39.921 * Looking for test storage... 00:08:39.921 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/alias_rpc 00:08:39.921 01:46:09 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:39.921 01:46:09 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:08:39.921 01:46:09 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:39.922 01:46:09 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:39.922 01:46:09 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:39.922 01:46:09 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:39.922 01:46:09 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:39.922 01:46:09 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:39.922 01:46:09 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:39.922 01:46:09 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:39.922 01:46:09 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:39.922 01:46:09 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:39.922 01:46:09 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:39.922 01:46:09 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:39.922 01:46:09 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:39.922 01:46:09 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:39.922 01:46:09 alias_rpc -- scripts/common.sh@345 -- # : 1 00:08:39.922 01:46:09 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:39.922 01:46:09 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:39.922 01:46:09 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:39.922 01:46:09 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:08:39.922 01:46:09 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:39.922 01:46:09 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:08:39.922 01:46:09 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:39.922 01:46:09 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:39.922 01:46:09 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:08:39.922 01:46:09 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:39.922 01:46:09 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:08:39.922 01:46:09 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:39.922 01:46:09 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:39.922 01:46:09 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:39.922 01:46:09 alias_rpc -- scripts/common.sh@368 -- # return 0 00:08:39.922 01:46:09 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:39.922 01:46:09 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:39.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.922 --rc genhtml_branch_coverage=1 00:08:39.922 --rc genhtml_function_coverage=1 00:08:39.922 --rc genhtml_legend=1 00:08:39.922 --rc geninfo_all_blocks=1 00:08:39.922 --rc geninfo_unexecuted_blocks=1 00:08:39.922 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:39.922 ' 00:08:39.922 01:46:09 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:39.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.922 --rc genhtml_branch_coverage=1 00:08:39.922 --rc genhtml_function_coverage=1 00:08:39.922 --rc genhtml_legend=1 00:08:39.922 --rc geninfo_all_blocks=1 00:08:39.922 --rc geninfo_unexecuted_blocks=1 00:08:39.922 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:39.922 ' 00:08:39.922 01:46:09 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:39.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.922 --rc genhtml_branch_coverage=1 00:08:39.922 --rc genhtml_function_coverage=1 00:08:39.922 --rc genhtml_legend=1 00:08:39.922 --rc geninfo_all_blocks=1 00:08:39.922 --rc geninfo_unexecuted_blocks=1 00:08:39.922 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:39.922 ' 00:08:39.922 01:46:09 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:39.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.922 --rc genhtml_branch_coverage=1 00:08:39.922 --rc genhtml_function_coverage=1 00:08:39.922 --rc genhtml_legend=1 00:08:39.922 --rc geninfo_all_blocks=1 00:08:39.922 --rc geninfo_unexecuted_blocks=1 00:08:39.922 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:39.922 ' 00:08:39.922 01:46:09 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:39.922 01:46:09 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=4032488 00:08:39.922 01:46:09 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:08:39.922 01:46:09 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 4032488 00:08:39.922 01:46:09 alias_rpc -- 
common/autotest_common.sh@831 -- # '[' -z 4032488 ']' 00:08:39.922 01:46:09 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.922 01:46:09 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:39.922 01:46:09 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.922 01:46:09 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:39.922 01:46:09 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:39.922 [2024-10-09 01:46:09.515835] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 00:08:39.922 [2024-10-09 01:46:09.515928] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4032488 ] 00:08:40.181 [2024-10-09 01:46:09.590106] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.181 [2024-10-09 01:46:09.639911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.440 01:46:09 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:40.440 01:46:09 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:40.440 01:46:09 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py load_config -i 00:08:40.440 01:46:10 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 4032488 00:08:40.440 01:46:10 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 4032488 ']' 00:08:40.440 01:46:10 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 4032488 00:08:40.440 01:46:10 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:08:40.440 01:46:10 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:40.440 01:46:10 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4032488 00:08:40.699 01:46:10 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:40.699 01:46:10 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:40.699 01:46:10 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4032488' 00:08:40.699 killing process with pid 4032488 00:08:40.699 01:46:10 alias_rpc -- common/autotest_common.sh@969 -- # kill 4032488 00:08:40.699 01:46:10 alias_rpc -- common/autotest_common.sh@974 -- # wait 4032488 00:08:40.959 00:08:40.959 real 0m1.183s 00:08:40.959 user 0m1.144s 00:08:40.959 sys 0m0.483s 00:08:40.959 01:46:10 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:40.959 01:46:10 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:40.959 ************************************ 00:08:40.959 END TEST alias_rpc 00:08:40.959 ************************************ 00:08:40.959 01:46:10 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:08:40.959 01:46:10 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:08:40.959 01:46:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:40.959 01:46:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:40.959 01:46:10 -- common/autotest_common.sh@10 -- # set +x 00:08:40.959 ************************************ 00:08:40.959 START TEST 
spdkcli_tcp 00:08:40.959 ************************************ 00:08:40.959 01:46:10 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/tcp.sh 00:08:41.218 * Looking for test storage... 00:08:41.218 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli 00:08:41.218 01:46:10 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:41.218 01:46:10 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:08:41.218 01:46:10 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:41.218 01:46:10 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:41.218 01:46:10 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:41.218 01:46:10 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:41.218 01:46:10 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:41.218 01:46:10 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:41.218 01:46:10 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:41.218 01:46:10 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:41.218 01:46:10 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:41.218 01:46:10 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:41.218 01:46:10 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:41.218 01:46:10 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:41.218 01:46:10 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:41.218 01:46:10 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:41.218 01:46:10 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:08:41.218 01:46:10 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:41.218 01:46:10 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:41.218 01:46:10 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:41.218 01:46:10 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:08:41.218 01:46:10 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:41.218 01:46:10 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:08:41.218 01:46:10 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:41.218 01:46:10 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:41.218 01:46:10 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:08:41.218 01:46:10 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:41.218 01:46:10 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:08:41.218 01:46:10 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:41.218 01:46:10 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:41.218 01:46:10 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:41.218 01:46:10 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:08:41.218 01:46:10 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:41.218 01:46:10 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:41.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.218 --rc genhtml_branch_coverage=1 00:08:41.218 --rc genhtml_function_coverage=1 00:08:41.218 --rc genhtml_legend=1 00:08:41.218 --rc geninfo_all_blocks=1 00:08:41.218 --rc geninfo_unexecuted_blocks=1 00:08:41.218 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:41.218 ' 00:08:41.218 01:46:10 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:41.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.218 --rc genhtml_branch_coverage=1 00:08:41.218 --rc genhtml_function_coverage=1 00:08:41.218 --rc genhtml_legend=1 00:08:41.218 --rc geninfo_all_blocks=1 00:08:41.218 --rc geninfo_unexecuted_blocks=1 00:08:41.218 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:41.218 ' 00:08:41.218 01:46:10 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:41.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.218 --rc genhtml_branch_coverage=1 00:08:41.218 --rc genhtml_function_coverage=1 00:08:41.218 --rc genhtml_legend=1 00:08:41.218 --rc geninfo_all_blocks=1 00:08:41.218 --rc geninfo_unexecuted_blocks=1 00:08:41.218 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:41.218 ' 00:08:41.218 01:46:10 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:41.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.218 --rc genhtml_branch_coverage=1 00:08:41.218 --rc genhtml_function_coverage=1 00:08:41.218 --rc genhtml_legend=1 00:08:41.218 --rc geninfo_all_blocks=1 00:08:41.218 --rc geninfo_unexecuted_blocks=1 00:08:41.218 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:41.218 ' 00:08:41.218 01:46:10 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/common.sh 00:08:41.218 01:46:10 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:08:41.218 01:46:10 spdkcli_tcp -- spdkcli/common.sh@7 -- # 
spdk_clear_config_py=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/json_config/clear_config.py 00:08:41.218 01:46:10 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:08:41.218 01:46:10 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:08:41.218 01:46:10 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:08:41.218 01:46:10 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:08:41.218 01:46:10 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:41.218 01:46:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:41.218 01:46:10 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=4032726 00:08:41.218 01:46:10 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:08:41.218 01:46:10 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 4032726 00:08:41.218 01:46:10 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 4032726 ']' 00:08:41.218 01:46:10 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.218 01:46:10 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:41.218 01:46:10 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.218 01:46:10 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:41.218 01:46:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:41.218 [2024-10-09 01:46:10.768679] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 00:08:41.218 [2024-10-09 01:46:10.768760] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4032726 ] 00:08:41.218 [2024-10-09 01:46:10.841019] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:41.477 [2024-10-09 01:46:10.887175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.477 [2024-10-09 01:46:10.887178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.477 01:46:11 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:41.477 01:46:11 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:08:41.477 01:46:11 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=4032736 00:08:41.477 01:46:11 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:08:41.477 01:46:11 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:08:41.736 [ 00:08:41.736 "spdk_get_version", 00:08:41.736 "rpc_get_methods", 00:08:41.736 "notify_get_notifications", 00:08:41.736 "notify_get_types", 00:08:41.736 "trace_get_info", 00:08:41.736 "trace_get_tpoint_group_mask", 00:08:41.736 "trace_disable_tpoint_group", 00:08:41.736 "trace_enable_tpoint_group", 00:08:41.736 "trace_clear_tpoint_mask", 00:08:41.736 "trace_set_tpoint_mask", 00:08:41.736 "fsdev_set_opts", 00:08:41.736 "fsdev_get_opts", 00:08:41.736 "framework_get_pci_devices", 00:08:41.736 "framework_get_config", 00:08:41.736 "framework_get_subsystems", 00:08:41.736 "vfu_tgt_set_base_path", 00:08:41.736 
"keyring_get_keys", 00:08:41.736 "iobuf_get_stats", 00:08:41.736 "iobuf_set_options", 00:08:41.736 "sock_get_default_impl", 00:08:41.736 "sock_set_default_impl", 00:08:41.736 "sock_impl_set_options", 00:08:41.736 "sock_impl_get_options", 00:08:41.736 "vmd_rescan", 00:08:41.736 "vmd_remove_device", 00:08:41.736 "vmd_enable", 00:08:41.736 "accel_get_stats", 00:08:41.736 "accel_set_options", 00:08:41.736 "accel_set_driver", 00:08:41.736 "accel_crypto_key_destroy", 00:08:41.736 "accel_crypto_keys_get", 00:08:41.736 "accel_crypto_key_create", 00:08:41.736 "accel_assign_opc", 00:08:41.736 "accel_get_module_info", 00:08:41.736 "accel_get_opc_assignments", 00:08:41.736 "bdev_get_histogram", 00:08:41.736 "bdev_enable_histogram", 00:08:41.736 "bdev_set_qos_limit", 00:08:41.736 "bdev_set_qd_sampling_period", 00:08:41.736 "bdev_get_bdevs", 00:08:41.736 "bdev_reset_iostat", 00:08:41.736 "bdev_get_iostat", 00:08:41.736 "bdev_examine", 00:08:41.736 "bdev_wait_for_examine", 00:08:41.736 "bdev_set_options", 00:08:41.736 "scsi_get_devices", 00:08:41.736 "thread_set_cpumask", 00:08:41.736 "scheduler_set_options", 00:08:41.736 "framework_get_governor", 00:08:41.736 "framework_get_scheduler", 00:08:41.736 "framework_set_scheduler", 00:08:41.736 "framework_get_reactors", 00:08:41.736 "thread_get_io_channels", 00:08:41.736 "thread_get_pollers", 00:08:41.736 "thread_get_stats", 00:08:41.736 "framework_monitor_context_switch", 00:08:41.736 "spdk_kill_instance", 00:08:41.736 "log_enable_timestamps", 00:08:41.736 "log_get_flags", 00:08:41.736 "log_clear_flag", 00:08:41.736 "log_set_flag", 00:08:41.736 "log_get_level", 00:08:41.736 "log_set_level", 00:08:41.736 "log_get_print_level", 00:08:41.736 "log_set_print_level", 00:08:41.736 "framework_enable_cpumask_locks", 00:08:41.736 "framework_disable_cpumask_locks", 00:08:41.736 "framework_wait_init", 00:08:41.736 "framework_start_init", 00:08:41.736 "virtio_blk_create_transport", 00:08:41.736 "virtio_blk_get_transports", 00:08:41.737 "vhost_controller_set_coalescing", 00:08:41.737 "vhost_get_controllers", 00:08:41.737 "vhost_delete_controller", 00:08:41.737 "vhost_create_blk_controller", 00:08:41.737 "vhost_scsi_controller_remove_target", 00:08:41.737 "vhost_scsi_controller_add_target", 00:08:41.737 "vhost_start_scsi_controller", 00:08:41.737 "vhost_create_scsi_controller", 00:08:41.737 "ublk_recover_disk", 00:08:41.737 "ublk_get_disks", 00:08:41.737 "ublk_stop_disk", 00:08:41.737 "ublk_start_disk", 00:08:41.737 "ublk_destroy_target", 00:08:41.737 "ublk_create_target", 00:08:41.737 "nbd_get_disks", 00:08:41.737 "nbd_stop_disk", 00:08:41.737 "nbd_start_disk", 00:08:41.737 "env_dpdk_get_mem_stats", 00:08:41.737 "nvmf_stop_mdns_prr", 00:08:41.737 "nvmf_publish_mdns_prr", 00:08:41.737 "nvmf_subsystem_get_listeners", 00:08:41.737 "nvmf_subsystem_get_qpairs", 00:08:41.737 "nvmf_subsystem_get_controllers", 00:08:41.737 "nvmf_get_stats", 00:08:41.737 "nvmf_get_transports", 00:08:41.737 "nvmf_create_transport", 00:08:41.737 "nvmf_get_targets", 00:08:41.737 "nvmf_delete_target", 00:08:41.737 "nvmf_create_target", 00:08:41.737 "nvmf_subsystem_allow_any_host", 00:08:41.737 "nvmf_subsystem_set_keys", 00:08:41.737 "nvmf_subsystem_remove_host", 00:08:41.737 "nvmf_subsystem_add_host", 00:08:41.737 "nvmf_ns_remove_host", 00:08:41.737 "nvmf_ns_add_host", 00:08:41.737 "nvmf_subsystem_remove_ns", 00:08:41.737 "nvmf_subsystem_set_ns_ana_group", 00:08:41.737 "nvmf_subsystem_add_ns", 00:08:41.737 "nvmf_subsystem_listener_set_ana_state", 00:08:41.737 "nvmf_discovery_get_referrals", 
00:08:41.737 "nvmf_discovery_remove_referral", 00:08:41.737 "nvmf_discovery_add_referral", 00:08:41.737 "nvmf_subsystem_remove_listener", 00:08:41.737 "nvmf_subsystem_add_listener", 00:08:41.737 "nvmf_delete_subsystem", 00:08:41.737 "nvmf_create_subsystem", 00:08:41.737 "nvmf_get_subsystems", 00:08:41.737 "nvmf_set_crdt", 00:08:41.737 "nvmf_set_config", 00:08:41.737 "nvmf_set_max_subsystems", 00:08:41.737 "iscsi_get_histogram", 00:08:41.737 "iscsi_enable_histogram", 00:08:41.737 "iscsi_set_options", 00:08:41.737 "iscsi_get_auth_groups", 00:08:41.737 "iscsi_auth_group_remove_secret", 00:08:41.737 "iscsi_auth_group_add_secret", 00:08:41.737 "iscsi_delete_auth_group", 00:08:41.737 "iscsi_create_auth_group", 00:08:41.737 "iscsi_set_discovery_auth", 00:08:41.737 "iscsi_get_options", 00:08:41.737 "iscsi_target_node_request_logout", 00:08:41.737 "iscsi_target_node_set_redirect", 00:08:41.737 "iscsi_target_node_set_auth", 00:08:41.737 "iscsi_target_node_add_lun", 00:08:41.737 "iscsi_get_stats", 00:08:41.737 "iscsi_get_connections", 00:08:41.737 "iscsi_portal_group_set_auth", 00:08:41.737 "iscsi_start_portal_group", 00:08:41.737 "iscsi_delete_portal_group", 00:08:41.737 "iscsi_create_portal_group", 00:08:41.737 "iscsi_get_portal_groups", 00:08:41.737 "iscsi_delete_target_node", 00:08:41.737 "iscsi_target_node_remove_pg_ig_maps", 00:08:41.737 "iscsi_target_node_add_pg_ig_maps", 00:08:41.737 "iscsi_create_target_node", 00:08:41.737 "iscsi_get_target_nodes", 00:08:41.737 "iscsi_delete_initiator_group", 00:08:41.737 "iscsi_initiator_group_remove_initiators", 00:08:41.737 "iscsi_initiator_group_add_initiators", 00:08:41.737 "iscsi_create_initiator_group", 00:08:41.737 "iscsi_get_initiator_groups", 00:08:41.737 "fsdev_aio_delete", 00:08:41.737 "fsdev_aio_create", 00:08:41.737 "keyring_linux_set_options", 00:08:41.737 "keyring_file_remove_key", 00:08:41.737 "keyring_file_add_key", 00:08:41.737 "vfu_virtio_create_fs_endpoint", 00:08:41.737 "vfu_virtio_create_scsi_endpoint", 00:08:41.737 "vfu_virtio_scsi_remove_target", 00:08:41.737 "vfu_virtio_scsi_add_target", 00:08:41.737 "vfu_virtio_create_blk_endpoint", 00:08:41.737 "vfu_virtio_delete_endpoint", 00:08:41.737 "iaa_scan_accel_module", 00:08:41.737 "dsa_scan_accel_module", 00:08:41.737 "ioat_scan_accel_module", 00:08:41.737 "accel_error_inject_error", 00:08:41.737 "bdev_iscsi_delete", 00:08:41.737 "bdev_iscsi_create", 00:08:41.737 "bdev_iscsi_set_options", 00:08:41.737 "bdev_virtio_attach_controller", 00:08:41.737 "bdev_virtio_scsi_get_devices", 00:08:41.737 "bdev_virtio_detach_controller", 00:08:41.737 "bdev_virtio_blk_set_hotplug", 00:08:41.737 "bdev_ftl_set_property", 00:08:41.737 "bdev_ftl_get_properties", 00:08:41.737 "bdev_ftl_get_stats", 00:08:41.737 "bdev_ftl_unmap", 00:08:41.737 "bdev_ftl_unload", 00:08:41.737 "bdev_ftl_delete", 00:08:41.737 "bdev_ftl_load", 00:08:41.737 "bdev_ftl_create", 00:08:41.737 "bdev_aio_delete", 00:08:41.737 "bdev_aio_rescan", 00:08:41.737 "bdev_aio_create", 00:08:41.737 "blobfs_create", 00:08:41.737 "blobfs_detect", 00:08:41.737 "blobfs_set_cache_size", 00:08:41.737 "bdev_zone_block_delete", 00:08:41.737 "bdev_zone_block_create", 00:08:41.737 "bdev_delay_delete", 00:08:41.737 "bdev_delay_create", 00:08:41.737 "bdev_delay_update_latency", 00:08:41.737 "bdev_split_delete", 00:08:41.737 "bdev_split_create", 00:08:41.737 "bdev_error_inject_error", 00:08:41.737 "bdev_error_delete", 00:08:41.737 "bdev_error_create", 00:08:41.737 "bdev_raid_set_options", 00:08:41.737 "bdev_raid_remove_base_bdev", 00:08:41.737 
"bdev_raid_add_base_bdev", 00:08:41.737 "bdev_raid_delete", 00:08:41.737 "bdev_raid_create", 00:08:41.737 "bdev_raid_get_bdevs", 00:08:41.737 "bdev_lvol_set_parent_bdev", 00:08:41.737 "bdev_lvol_set_parent", 00:08:41.737 "bdev_lvol_check_shallow_copy", 00:08:41.737 "bdev_lvol_start_shallow_copy", 00:08:41.737 "bdev_lvol_grow_lvstore", 00:08:41.737 "bdev_lvol_get_lvols", 00:08:41.737 "bdev_lvol_get_lvstores", 00:08:41.737 "bdev_lvol_delete", 00:08:41.737 "bdev_lvol_set_read_only", 00:08:41.737 "bdev_lvol_resize", 00:08:41.737 "bdev_lvol_decouple_parent", 00:08:41.737 "bdev_lvol_inflate", 00:08:41.737 "bdev_lvol_rename", 00:08:41.737 "bdev_lvol_clone_bdev", 00:08:41.737 "bdev_lvol_clone", 00:08:41.737 "bdev_lvol_snapshot", 00:08:41.737 "bdev_lvol_create", 00:08:41.737 "bdev_lvol_delete_lvstore", 00:08:41.737 "bdev_lvol_rename_lvstore", 00:08:41.737 "bdev_lvol_create_lvstore", 00:08:41.737 "bdev_passthru_delete", 00:08:41.737 "bdev_passthru_create", 00:08:41.737 "bdev_nvme_cuse_unregister", 00:08:41.737 "bdev_nvme_cuse_register", 00:08:41.737 "bdev_opal_new_user", 00:08:41.737 "bdev_opal_set_lock_state", 00:08:41.737 "bdev_opal_delete", 00:08:41.737 "bdev_opal_get_info", 00:08:41.737 "bdev_opal_create", 00:08:41.737 "bdev_nvme_opal_revert", 00:08:41.737 "bdev_nvme_opal_init", 00:08:41.737 "bdev_nvme_send_cmd", 00:08:41.737 "bdev_nvme_set_keys", 00:08:41.737 "bdev_nvme_get_path_iostat", 00:08:41.737 "bdev_nvme_get_mdns_discovery_info", 00:08:41.737 "bdev_nvme_stop_mdns_discovery", 00:08:41.737 "bdev_nvme_start_mdns_discovery", 00:08:41.737 "bdev_nvme_set_multipath_policy", 00:08:41.737 "bdev_nvme_set_preferred_path", 00:08:41.737 "bdev_nvme_get_io_paths", 00:08:41.737 "bdev_nvme_remove_error_injection", 00:08:41.737 "bdev_nvme_add_error_injection", 00:08:41.737 "bdev_nvme_get_discovery_info", 00:08:41.737 "bdev_nvme_stop_discovery", 00:08:41.737 "bdev_nvme_start_discovery", 00:08:41.737 "bdev_nvme_get_controller_health_info", 00:08:41.737 "bdev_nvme_disable_controller", 00:08:41.737 "bdev_nvme_enable_controller", 00:08:41.737 "bdev_nvme_reset_controller", 00:08:41.737 "bdev_nvme_get_transport_statistics", 00:08:41.737 "bdev_nvme_apply_firmware", 00:08:41.737 "bdev_nvme_detach_controller", 00:08:41.737 "bdev_nvme_get_controllers", 00:08:41.737 "bdev_nvme_attach_controller", 00:08:41.737 "bdev_nvme_set_hotplug", 00:08:41.737 "bdev_nvme_set_options", 00:08:41.737 "bdev_null_resize", 00:08:41.737 "bdev_null_delete", 00:08:41.737 "bdev_null_create", 00:08:41.737 "bdev_malloc_delete", 00:08:41.737 "bdev_malloc_create" 00:08:41.737 ] 00:08:41.737 01:46:11 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:08:41.737 01:46:11 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:41.737 01:46:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:41.737 01:46:11 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:41.737 01:46:11 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 4032726 00:08:41.737 01:46:11 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 4032726 ']' 00:08:41.737 01:46:11 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 4032726 00:08:41.737 01:46:11 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:08:41.737 01:46:11 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:41.737 01:46:11 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4032726 00:08:41.997 01:46:11 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:41.997 
01:46:11 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:41.997 01:46:11 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4032726' 00:08:41.997 killing process with pid 4032726 00:08:41.997 01:46:11 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 4032726 00:08:41.997 01:46:11 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 4032726 00:08:42.257 00:08:42.257 real 0m1.175s 00:08:42.257 user 0m1.974s 00:08:42.257 sys 0m0.495s 00:08:42.257 01:46:11 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:42.257 01:46:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:42.257 ************************************ 00:08:42.257 END TEST spdkcli_tcp 00:08:42.257 ************************************ 00:08:42.257 01:46:11 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:42.257 01:46:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:42.257 01:46:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:42.257 01:46:11 -- common/autotest_common.sh@10 -- # set +x 00:08:42.257 ************************************ 00:08:42.257 START TEST dpdk_mem_utility 00:08:42.257 ************************************ 00:08:42.257 01:46:11 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:42.257 * Looking for test storage... 00:08:42.257 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/dpdk_memory_utility 00:08:42.257 01:46:11 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:42.257 01:46:11 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:08:42.257 01:46:11 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:42.516 01:46:11 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:42.516 01:46:11 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:42.516 01:46:11 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:42.516 01:46:11 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:42.516 01:46:11 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:08:42.516 01:46:11 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:08:42.516 01:46:11 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:08:42.516 01:46:11 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:08:42.516 01:46:11 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:08:42.516 01:46:11 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:08:42.516 01:46:11 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:08:42.516 01:46:11 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:42.516 01:46:11 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:08:42.516 01:46:11 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:08:42.516 01:46:11 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:42.516 01:46:11 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:42.516 01:46:11 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:08:42.516 01:46:11 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:08:42.516 01:46:11 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:42.516 01:46:11 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:08:42.516 01:46:11 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:08:42.516 01:46:11 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:08:42.516 01:46:11 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:08:42.516 01:46:11 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:42.516 01:46:11 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:08:42.516 01:46:11 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:08:42.516 01:46:11 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:42.516 01:46:11 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:42.516 01:46:11 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:08:42.516 01:46:11 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:42.516 01:46:11 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:42.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.516 --rc genhtml_branch_coverage=1 00:08:42.516 --rc genhtml_function_coverage=1 00:08:42.516 --rc genhtml_legend=1 00:08:42.516 --rc geninfo_all_blocks=1 00:08:42.516 --rc geninfo_unexecuted_blocks=1 00:08:42.516 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:42.516 ' 00:08:42.516 01:46:11 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:42.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.516 --rc genhtml_branch_coverage=1 00:08:42.516 --rc genhtml_function_coverage=1 00:08:42.516 --rc genhtml_legend=1 00:08:42.516 --rc geninfo_all_blocks=1 00:08:42.516 --rc geninfo_unexecuted_blocks=1 00:08:42.516 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:42.516 ' 00:08:42.516 01:46:11 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:42.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.516 --rc genhtml_branch_coverage=1 00:08:42.516 --rc genhtml_function_coverage=1 00:08:42.516 --rc genhtml_legend=1 00:08:42.516 --rc geninfo_all_blocks=1 00:08:42.516 --rc geninfo_unexecuted_blocks=1 00:08:42.516 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:42.516 ' 00:08:42.516 01:46:11 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:42.516 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.516 --rc genhtml_branch_coverage=1 00:08:42.516 --rc genhtml_function_coverage=1 00:08:42.516 --rc genhtml_legend=1 00:08:42.516 --rc geninfo_all_blocks=1 00:08:42.516 --rc geninfo_unexecuted_blocks=1 00:08:42.516 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:42.516 ' 00:08:42.516 01:46:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:08:42.516 01:46:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=4032981 00:08:42.516 01:46:11 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 4032981 00:08:42.516 01:46:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt 00:08:42.516 01:46:11 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 4032981 ']' 00:08:42.516 01:46:11 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.516 01:46:11 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:42.516 01:46:11 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.516 01:46:11 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:42.516 01:46:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:42.516 [2024-10-09 01:46:11.998599] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 00:08:42.516 [2024-10-09 01:46:11.998693] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4032981 ] 00:08:42.516 [2024-10-09 01:46:12.072779] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.516 [2024-10-09 01:46:12.116485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.776 01:46:12 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:42.776 01:46:12 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:08:42.776 01:46:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:42.776 01:46:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:42.776 01:46:12 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.776 01:46:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:42.776 { 00:08:42.776 "filename": "/tmp/spdk_mem_dump.txt" 00:08:42.776 } 00:08:42.776 01:46:12 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.776 01:46:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:08:42.776 DPDK memory size 810.000000 MiB in 1 heap(s) 00:08:42.776 1 heaps totaling size 810.000000 MiB 00:08:42.776 size: 810.000000 MiB heap id: 0 00:08:42.776 end heaps---------- 00:08:42.776 9 mempools totaling size 595.772034 MiB 00:08:42.776 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:42.776 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:42.776 size: 92.545471 MiB name: bdev_io_4032981 00:08:42.776 size: 50.003479 MiB name: msgpool_4032981 00:08:42.776 size: 36.509338 MiB name: fsdev_io_4032981 00:08:42.776 size: 21.763794 MiB name: PDU_Pool 00:08:42.776 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:42.776 size: 4.133484 MiB name: evtpool_4032981 00:08:42.776 size: 0.026123 MiB name: Session_Pool 00:08:42.776 end mempools------- 00:08:42.776 6 memzones totaling size 4.142822 MiB 00:08:42.776 size: 1.000366 MiB name: RG_ring_0_4032981 00:08:42.776 size: 1.000366 MiB name: RG_ring_1_4032981 00:08:42.776 size: 1.000366 MiB name: RG_ring_4_4032981 
00:08:42.776 size: 1.000366 MiB name: RG_ring_5_4032981 00:08:42.776 size: 0.125366 MiB name: RG_ring_2_4032981 00:08:42.776 size: 0.015991 MiB name: RG_ring_3_4032981 00:08:42.776 end memzones------- 00:08:42.776 01:46:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:08:42.776 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:08:42.776 list of free elements. size: 10.862488 MiB 00:08:42.776 element at address: 0x200018a00000 with size: 0.999878 MiB 00:08:42.776 element at address: 0x200018c00000 with size: 0.999878 MiB 00:08:42.776 element at address: 0x200000400000 with size: 0.998535 MiB 00:08:42.776 element at address: 0x200031800000 with size: 0.994446 MiB 00:08:42.776 element at address: 0x200008000000 with size: 0.959839 MiB 00:08:42.776 element at address: 0x200012c00000 with size: 0.954285 MiB 00:08:42.776 element at address: 0x200018e00000 with size: 0.936584 MiB 00:08:42.776 element at address: 0x200000200000 with size: 0.717346 MiB 00:08:42.776 element at address: 0x20001a600000 with size: 0.582886 MiB 00:08:42.776 element at address: 0x200000c00000 with size: 0.495422 MiB 00:08:42.776 element at address: 0x200003e00000 with size: 0.490723 MiB 00:08:42.776 element at address: 0x200019000000 with size: 0.485657 MiB 00:08:42.776 element at address: 0x200010600000 with size: 0.481934 MiB 00:08:42.776 element at address: 0x200027a00000 with size: 0.410034 MiB 00:08:42.776 element at address: 0x200000800000 with size: 0.355042 MiB 00:08:42.776 list of standard malloc elements. size: 199.218628 MiB 00:08:42.776 element at address: 0x2000081fff80 with size: 132.000122 MiB 00:08:42.776 element at address: 0x200003ffff80 with size: 64.000122 MiB 00:08:42.776 element at address: 0x200018afff80 with size: 1.000122 MiB 00:08:42.776 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:08:42.776 element at address: 0x200018efff80 with size: 1.000122 MiB 00:08:42.776 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:08:42.776 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:08:42.776 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:08:42.776 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:08:42.776 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:08:42.776 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:08:42.776 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:08:42.776 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:08:42.776 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:08:42.776 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:08:42.776 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:08:42.776 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:08:42.776 element at address: 0x20000085b040 with size: 0.000183 MiB 00:08:42.776 element at address: 0x20000085b100 with size: 0.000183 MiB 00:08:42.776 element at address: 0x2000008db3c0 with size: 0.000183 MiB 00:08:42.776 element at address: 0x2000008db5c0 with size: 0.000183 MiB 00:08:42.776 element at address: 0x2000008df880 with size: 0.000183 MiB 00:08:42.776 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:08:42.776 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:08:42.776 element at address: 0x200000cff000 with size: 0.000183 MiB 00:08:42.776 element at address: 0x200000cff0c0 with size: 
0.000183 MiB 00:08:42.776 element at address: 0x200003e7da00 with size: 0.000183 MiB 00:08:42.776 element at address: 0x200003e7dac0 with size: 0.000183 MiB 00:08:42.776 element at address: 0x200003efdd80 with size: 0.000183 MiB 00:08:42.776 element at address: 0x2000080fdd80 with size: 0.000183 MiB 00:08:42.776 element at address: 0x20001067b600 with size: 0.000183 MiB 00:08:42.776 element at address: 0x20001067b6c0 with size: 0.000183 MiB 00:08:42.776 element at address: 0x2000106fb980 with size: 0.000183 MiB 00:08:42.776 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:08:42.776 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:08:42.776 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:08:42.776 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:08:42.776 element at address: 0x20001a695380 with size: 0.000183 MiB 00:08:42.776 element at address: 0x20001a695440 with size: 0.000183 MiB 00:08:42.776 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:08:42.776 element at address: 0x200027a69040 with size: 0.000183 MiB 00:08:42.776 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:08:42.776 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:08:42.776 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:08:42.776 list of memzone associated elements. size: 599.918884 MiB 00:08:42.776 element at address: 0x20001a695500 with size: 211.416748 MiB 00:08:42.776 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:08:42.776 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:08:42.776 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:08:42.776 element at address: 0x200012df4780 with size: 92.045044 MiB 00:08:42.776 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_4032981_0 00:08:42.776 element at address: 0x200000dff380 with size: 48.003052 MiB 00:08:42.777 associated memzone info: size: 48.002930 MiB name: MP_msgpool_4032981_0 00:08:42.777 element at address: 0x2000107fdb80 with size: 36.008911 MiB 00:08:42.777 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_4032981_0 00:08:42.777 element at address: 0x2000191be940 with size: 20.255554 MiB 00:08:42.777 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:08:42.777 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:08:42.777 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:08:42.777 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:08:42.777 associated memzone info: size: 3.000122 MiB name: MP_evtpool_4032981_0 00:08:42.777 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:08:42.777 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_4032981 00:08:42.777 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:08:42.777 associated memzone info: size: 1.007996 MiB name: MP_evtpool_4032981 00:08:42.777 element at address: 0x2000106fba40 with size: 1.008118 MiB 00:08:42.777 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:08:42.777 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:08:42.777 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:08:42.777 element at address: 0x2000080fde40 with size: 1.008118 MiB 00:08:42.777 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:08:42.777 element at address: 0x200003efde40 with size: 1.008118 MiB 00:08:42.777 associated memzone info: size: 1.007996 MiB 
name: MP_SCSI_TASK_Pool 00:08:42.777 element at address: 0x200000cff180 with size: 1.000488 MiB 00:08:42.777 associated memzone info: size: 1.000366 MiB name: RG_ring_0_4032981 00:08:42.777 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:08:42.777 associated memzone info: size: 1.000366 MiB name: RG_ring_1_4032981 00:08:42.777 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:08:42.777 associated memzone info: size: 1.000366 MiB name: RG_ring_4_4032981 00:08:42.777 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:08:42.777 associated memzone info: size: 1.000366 MiB name: RG_ring_5_4032981 00:08:42.777 element at address: 0x20000085b1c0 with size: 0.500488 MiB 00:08:42.777 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_4032981 00:08:42.777 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:08:42.777 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_4032981 00:08:42.777 element at address: 0x20001067b780 with size: 0.500488 MiB 00:08:42.777 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:08:42.777 element at address: 0x200003e7db80 with size: 0.500488 MiB 00:08:42.777 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:08:42.777 element at address: 0x20001907c540 with size: 0.250488 MiB 00:08:42.777 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:08:42.777 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:08:42.777 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_4032981 00:08:42.777 element at address: 0x2000008df940 with size: 0.125488 MiB 00:08:42.777 associated memzone info: size: 0.125366 MiB name: RG_ring_2_4032981 00:08:42.777 element at address: 0x2000080f5b80 with size: 0.031738 MiB 00:08:42.777 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:08:42.777 element at address: 0x200027a69100 with size: 0.023743 MiB 00:08:42.777 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:08:42.777 element at address: 0x2000008db680 with size: 0.016113 MiB 00:08:42.777 associated memzone info: size: 0.015991 MiB name: RG_ring_3_4032981 00:08:42.777 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:08:42.777 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:08:42.777 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:08:42.777 associated memzone info: size: 0.000183 MiB name: MP_msgpool_4032981 00:08:42.777 element at address: 0x2000008db480 with size: 0.000305 MiB 00:08:42.777 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_4032981 00:08:42.777 element at address: 0x20000085af00 with size: 0.000305 MiB 00:08:42.777 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_4032981 00:08:42.777 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:08:42.777 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:08:43.036 01:46:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:08:43.036 01:46:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 4032981 00:08:43.036 01:46:12 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 4032981 ']' 00:08:43.036 01:46:12 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 4032981 00:08:43.036 01:46:12 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:08:43.036 01:46:12 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux 
= Linux ']' 00:08:43.036 01:46:12 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4032981 00:08:43.036 01:46:12 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:43.036 01:46:12 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:43.036 01:46:12 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4032981' 00:08:43.036 killing process with pid 4032981 00:08:43.036 01:46:12 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 4032981 00:08:43.036 01:46:12 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 4032981 00:08:43.296 00:08:43.296 real 0m1.031s 00:08:43.296 user 0m0.937s 00:08:43.296 sys 0m0.430s 00:08:43.296 01:46:12 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:43.296 01:46:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:43.296 ************************************ 00:08:43.296 END TEST dpdk_mem_utility 00:08:43.296 ************************************ 00:08:43.296 01:46:12 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:08:43.296 01:46:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:43.296 01:46:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:43.296 01:46:12 -- common/autotest_common.sh@10 -- # set +x 00:08:43.296 ************************************ 00:08:43.296 START TEST event 00:08:43.296 ************************************ 00:08:43.296 01:46:12 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event.sh 00:08:43.555 * Looking for test storage... 00:08:43.555 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:08:43.555 01:46:13 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:43.555 01:46:13 event -- common/autotest_common.sh@1681 -- # lcov --version 00:08:43.555 01:46:13 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:43.555 01:46:13 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:43.555 01:46:13 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:43.555 01:46:13 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:43.555 01:46:13 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:43.555 01:46:13 event -- scripts/common.sh@336 -- # IFS=.-: 00:08:43.555 01:46:13 event -- scripts/common.sh@336 -- # read -ra ver1 00:08:43.555 01:46:13 event -- scripts/common.sh@337 -- # IFS=.-: 00:08:43.555 01:46:13 event -- scripts/common.sh@337 -- # read -ra ver2 00:08:43.555 01:46:13 event -- scripts/common.sh@338 -- # local 'op=<' 00:08:43.555 01:46:13 event -- scripts/common.sh@340 -- # ver1_l=2 00:08:43.555 01:46:13 event -- scripts/common.sh@341 -- # ver2_l=1 00:08:43.555 01:46:13 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:43.555 01:46:13 event -- scripts/common.sh@344 -- # case "$op" in 00:08:43.555 01:46:13 event -- scripts/common.sh@345 -- # : 1 00:08:43.555 01:46:13 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:43.555 01:46:13 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:43.555 01:46:13 event -- scripts/common.sh@365 -- # decimal 1 00:08:43.555 01:46:13 event -- scripts/common.sh@353 -- # local d=1 00:08:43.555 01:46:13 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:43.555 01:46:13 event -- scripts/common.sh@355 -- # echo 1 00:08:43.555 01:46:13 event -- scripts/common.sh@365 -- # ver1[v]=1 00:08:43.555 01:46:13 event -- scripts/common.sh@366 -- # decimal 2 00:08:43.555 01:46:13 event -- scripts/common.sh@353 -- # local d=2 00:08:43.555 01:46:13 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:43.555 01:46:13 event -- scripts/common.sh@355 -- # echo 2 00:08:43.555 01:46:13 event -- scripts/common.sh@366 -- # ver2[v]=2 00:08:43.555 01:46:13 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:43.555 01:46:13 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:43.555 01:46:13 event -- scripts/common.sh@368 -- # return 0 00:08:43.555 01:46:13 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:43.555 01:46:13 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:43.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.555 --rc genhtml_branch_coverage=1 00:08:43.555 --rc genhtml_function_coverage=1 00:08:43.555 --rc genhtml_legend=1 00:08:43.555 --rc geninfo_all_blocks=1 00:08:43.555 --rc geninfo_unexecuted_blocks=1 00:08:43.555 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:43.555 ' 00:08:43.555 01:46:13 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:43.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.555 --rc genhtml_branch_coverage=1 00:08:43.555 --rc genhtml_function_coverage=1 00:08:43.555 --rc genhtml_legend=1 00:08:43.555 --rc geninfo_all_blocks=1 00:08:43.555 --rc geninfo_unexecuted_blocks=1 00:08:43.555 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:43.555 ' 00:08:43.555 01:46:13 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:43.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.555 --rc genhtml_branch_coverage=1 00:08:43.555 --rc genhtml_function_coverage=1 00:08:43.555 --rc genhtml_legend=1 00:08:43.555 --rc geninfo_all_blocks=1 00:08:43.555 --rc geninfo_unexecuted_blocks=1 00:08:43.555 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:43.555 ' 00:08:43.555 01:46:13 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:43.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.555 --rc genhtml_branch_coverage=1 00:08:43.555 --rc genhtml_function_coverage=1 00:08:43.555 --rc genhtml_legend=1 00:08:43.555 --rc geninfo_all_blocks=1 00:08:43.555 --rc geninfo_unexecuted_blocks=1 00:08:43.555 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:43.555 ' 00:08:43.555 01:46:13 event -- event/event.sh@9 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/bdev/nbd_common.sh 00:08:43.555 01:46:13 event -- bdev/nbd_common.sh@6 -- # set -e 00:08:43.555 01:46:13 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:43.555 01:46:13 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:43.555 01:46:13 event -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:08:43.555 01:46:13 event -- common/autotest_common.sh@10 -- # set +x 00:08:43.555 ************************************ 00:08:43.555 START TEST event_perf 00:08:43.555 ************************************ 00:08:43.555 01:46:13 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:43.555 Running I/O for 1 seconds...[2024-10-09 01:46:13.158109] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 00:08:43.555 [2024-10-09 01:46:13.158208] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4033219 ] 00:08:43.815 [2024-10-09 01:46:13.235966] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:43.815 [2024-10-09 01:46:13.288839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.815 [2024-10-09 01:46:13.290830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:43.815 [2024-10-09 01:46:13.290853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:43.815 [2024-10-09 01:46:13.290855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.751 Running I/O for 1 seconds... 00:08:44.751 lcore 0: 199893 00:08:44.751 lcore 1: 199891 00:08:44.751 lcore 2: 199892 00:08:44.751 lcore 3: 199893 00:08:44.751 done. 00:08:44.751 00:08:44.751 real 0m1.197s 00:08:44.751 user 0m4.100s 00:08:44.751 sys 0m0.094s 00:08:44.751 01:46:14 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:44.751 01:46:14 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:08:44.751 ************************************ 00:08:44.751 END TEST event_perf 00:08:44.751 ************************************ 00:08:44.751 01:46:14 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:08:44.751 01:46:14 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:44.751 01:46:14 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:44.751 01:46:14 event -- common/autotest_common.sh@10 -- # set +x 00:08:44.751 ************************************ 00:08:44.751 START TEST event_reactor 00:08:44.751 ************************************ 00:08:44.751 01:46:14 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:08:45.010 [2024-10-09 01:46:14.430844] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 
00:08:45.010 [2024-10-09 01:46:14.430942] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4033414 ] 00:08:45.010 [2024-10-09 01:46:14.505370] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.010 [2024-10-09 01:46:14.550877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.947 test_start 00:08:45.947 oneshot 00:08:45.947 tick 100 00:08:45.947 tick 100 00:08:45.947 tick 250 00:08:45.947 tick 100 00:08:45.947 tick 100 00:08:45.947 tick 100 00:08:45.947 tick 250 00:08:45.947 tick 500 00:08:45.947 tick 100 00:08:45.947 tick 100 00:08:45.947 tick 250 00:08:45.947 tick 100 00:08:45.947 tick 100 00:08:45.947 test_end 00:08:45.947 00:08:45.947 real 0m1.179s 00:08:45.947 user 0m1.095s 00:08:45.947 sys 0m0.080s 00:08:45.947 01:46:15 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:45.947 01:46:15 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:08:45.947 ************************************ 00:08:45.947 END TEST event_reactor 00:08:45.947 ************************************ 00:08:46.207 01:46:15 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:46.207 01:46:15 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:46.207 01:46:15 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:46.207 01:46:15 event -- common/autotest_common.sh@10 -- # set +x 00:08:46.207 ************************************ 00:08:46.207 START TEST event_reactor_perf 00:08:46.207 ************************************ 00:08:46.207 01:46:15 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:46.207 [2024-10-09 01:46:15.672046] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 
00:08:46.207 [2024-10-09 01:46:15.672144] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4033558 ] 00:08:46.207 [2024-10-09 01:46:15.748436] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.207 [2024-10-09 01:46:15.794633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.584 test_start 00:08:47.584 test_end 00:08:47.584 Performance: 967248 events per second 00:08:47.584 00:08:47.584 real 0m1.185s 00:08:47.584 user 0m1.093s 00:08:47.584 sys 0m0.088s 00:08:47.584 01:46:16 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:47.584 01:46:16 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:08:47.584 ************************************ 00:08:47.584 END TEST event_reactor_perf 00:08:47.584 ************************************ 00:08:47.584 01:46:16 event -- event/event.sh@49 -- # uname -s 00:08:47.584 01:46:16 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:47.584 01:46:16 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:08:47.584 01:46:16 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:47.584 01:46:16 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:47.584 01:46:16 event -- common/autotest_common.sh@10 -- # set +x 00:08:47.584 ************************************ 00:08:47.584 START TEST event_scheduler 00:08:47.584 ************************************ 00:08:47.584 01:46:16 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:08:47.584 * Looking for test storage... 
00:08:47.584 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler 00:08:47.584 01:46:17 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:47.584 01:46:17 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:08:47.584 01:46:17 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:47.584 01:46:17 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:47.584 01:46:17 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:47.584 01:46:17 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:47.584 01:46:17 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:47.584 01:46:17 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:08:47.584 01:46:17 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:08:47.584 01:46:17 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:08:47.584 01:46:17 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:08:47.584 01:46:17 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:08:47.584 01:46:17 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:08:47.584 01:46:17 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:08:47.584 01:46:17 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:47.584 01:46:17 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:08:47.584 01:46:17 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:08:47.584 01:46:17 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:47.584 01:46:17 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:47.584 01:46:17 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:08:47.584 01:46:17 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:08:47.584 01:46:17 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:47.584 01:46:17 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:08:47.584 01:46:17 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:08:47.584 01:46:17 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:08:47.584 01:46:17 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:08:47.584 01:46:17 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:47.584 01:46:17 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:08:47.584 01:46:17 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:08:47.584 01:46:17 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:47.584 01:46:17 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:47.584 01:46:17 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:08:47.584 01:46:17 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:47.584 01:46:17 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:47.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.584 --rc genhtml_branch_coverage=1 00:08:47.584 --rc genhtml_function_coverage=1 00:08:47.584 --rc genhtml_legend=1 00:08:47.584 --rc geninfo_all_blocks=1 00:08:47.584 --rc geninfo_unexecuted_blocks=1 00:08:47.584 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:47.584 ' 00:08:47.584 01:46:17 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:47.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.584 --rc genhtml_branch_coverage=1 00:08:47.584 --rc genhtml_function_coverage=1 00:08:47.584 --rc genhtml_legend=1 00:08:47.584 --rc geninfo_all_blocks=1 00:08:47.584 --rc geninfo_unexecuted_blocks=1 00:08:47.584 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:47.584 ' 00:08:47.584 01:46:17 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:47.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.584 --rc genhtml_branch_coverage=1 00:08:47.584 --rc genhtml_function_coverage=1 00:08:47.584 --rc genhtml_legend=1 00:08:47.584 --rc geninfo_all_blocks=1 00:08:47.584 --rc geninfo_unexecuted_blocks=1 00:08:47.584 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:47.584 ' 00:08:47.584 01:46:17 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:47.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.585 --rc genhtml_branch_coverage=1 00:08:47.585 --rc genhtml_function_coverage=1 00:08:47.585 --rc genhtml_legend=1 00:08:47.585 --rc geninfo_all_blocks=1 00:08:47.585 --rc geninfo_unexecuted_blocks=1 00:08:47.585 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:08:47.585 ' 00:08:47.585 01:46:17 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:47.585 01:46:17 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/scheduler/scheduler 
-m 0xF -p 0x2 --wait-for-rpc -f 00:08:47.585 01:46:17 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=4033838 00:08:47.585 01:46:17 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:47.585 01:46:17 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 4033838 00:08:47.585 01:46:17 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 4033838 ']' 00:08:47.585 01:46:17 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.585 01:46:17 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:47.585 01:46:17 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.585 01:46:17 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:47.585 01:46:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:47.585 [2024-10-09 01:46:17.118378] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 00:08:47.585 [2024-10-09 01:46:17.118433] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4033838 ] 00:08:47.585 [2024-10-09 01:46:17.186595] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:47.585 [2024-10-09 01:46:17.240468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.585 [2024-10-09 01:46:17.240492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:47.585 [2024-10-09 01:46:17.240551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:47.585 [2024-10-09 01:46:17.240553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:47.844 01:46:17 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:47.844 01:46:17 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:08:47.844 01:46:17 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:47.845 01:46:17 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.845 01:46:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:47.845 [2024-10-09 01:46:17.313297] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:08:47.845 [2024-10-09 01:46:17.313320] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:08:47.845 [2024-10-09 01:46:17.313332] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:08:47.845 [2024-10-09 01:46:17.313340] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:08:47.845 [2024-10-09 01:46:17.313347] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:08:47.845 01:46:17 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.845 01:46:17 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:47.845 01:46:17 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.845 01:46:17 
event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:47.845 [2024-10-09 01:46:17.395876] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:08:47.845 01:46:17 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.845 01:46:17 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:47.845 01:46:17 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:47.845 01:46:17 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:47.845 01:46:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:47.845 ************************************ 00:08:47.845 START TEST scheduler_create_thread 00:08:47.845 ************************************ 00:08:47.845 01:46:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:08:47.845 01:46:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:47.845 01:46:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.845 01:46:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:47.845 2 00:08:47.845 01:46:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.845 01:46:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:47.845 01:46:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.845 01:46:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:47.845 3 00:08:47.845 01:46:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.845 01:46:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:47.845 01:46:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.845 01:46:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:47.845 4 00:08:47.845 01:46:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.845 01:46:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:47.845 01:46:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.845 01:46:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:47.845 5 00:08:47.845 01:46:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.845 01:46:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:47.845 01:46:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.845 01:46:17 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:47.845 6 00:08:47.845 01:46:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.845 01:46:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:47.845 01:46:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.845 01:46:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:47.845 7 00:08:47.845 01:46:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.845 01:46:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:47.845 01:46:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.845 01:46:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:47.845 8 00:08:47.845 01:46:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.845 01:46:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:47.845 01:46:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.845 01:46:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:48.104 9 00:08:48.104 01:46:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.104 01:46:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:48.104 01:46:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.104 01:46:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:48.104 10 00:08:48.104 01:46:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.104 01:46:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:08:48.104 01:46:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.104 01:46:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:48.104 01:46:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.104 01:46:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:48.104 01:46:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:48.104 01:46:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.104 01:46:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:49.041 01:46:18 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.041 01:46:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:49.041 01:46:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.041 01:46:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:50.419 01:46:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.419 01:46:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:50.419 01:46:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:50.419 01:46:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.419 01:46:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:51.358 01:46:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.358 00:08:51.358 real 0m3.381s 00:08:51.358 user 0m0.022s 00:08:51.358 sys 0m0.010s 00:08:51.358 01:46:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:51.358 01:46:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:51.358 ************************************ 00:08:51.358 END TEST scheduler_create_thread 00:08:51.358 ************************************ 00:08:51.358 01:46:20 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:51.358 01:46:20 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 4033838 00:08:51.358 01:46:20 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 4033838 ']' 00:08:51.358 01:46:20 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 4033838 00:08:51.358 01:46:20 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:08:51.358 01:46:20 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:51.358 01:46:20 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4033838 00:08:51.358 01:46:20 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:08:51.358 01:46:20 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:08:51.358 01:46:20 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4033838' 00:08:51.358 killing process with pid 4033838 00:08:51.358 01:46:20 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 4033838 00:08:51.358 01:46:20 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 4033838 00:08:51.617 [2024-10-09 01:46:21.200250] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
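For reference, the RPC sequence traced in the scheduler test above can be reproduced by hand. The sketch below is assembled only from commands that appear in the trace; it assumes the test app from test/event/scheduler is already running with --wait-for-rpc (listening on the default /var/tmp/spdk.sock), that $SPDK_DIR points at an SPDK checkout, and that rpc.py can import the scheduler_plugin module — the PYTHONPATH line is an illustrative assumption, and the thread ids 11/12 are the ones reported in this particular run.

  # Assumption: scheduler app started as test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
  export PYTHONPATH=$SPDK_DIR/test/event/scheduler   # assumed location of scheduler_plugin for rpc.py
  rpc=$SPDK_DIR/scripts/rpc.py

  $rpc framework_set_scheduler dynamic               # select the dynamic scheduler before init
  $rpc framework_start_init                          # finish subsystem initialization
  # Create an active thread pinned to core 0 (mask 0x1) at 100% busy, as in the trace
  $rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  # Adjust / remove threads by the ids returned from scheduler_thread_create (11 and 12 in this run)
  $rpc --plugin scheduler_plugin scheduler_thread_set_active 11 50
  $rpc --plugin scheduler_plugin scheduler_thread_delete 12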
00:08:51.877 00:08:51.877 real 0m4.484s 00:08:51.877 user 0m7.862s 00:08:51.877 sys 0m0.441s 00:08:51.877 01:46:21 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:51.877 01:46:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:51.877 ************************************ 00:08:51.877 END TEST event_scheduler 00:08:51.877 ************************************ 00:08:51.877 01:46:21 event -- event/event.sh@51 -- # modprobe -n nbd 00:08:51.877 01:46:21 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:51.877 01:46:21 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:51.877 01:46:21 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:51.877 01:46:21 event -- common/autotest_common.sh@10 -- # set +x 00:08:51.877 ************************************ 00:08:51.877 START TEST app_repeat 00:08:51.877 ************************************ 00:08:51.877 01:46:21 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:08:51.877 01:46:21 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:51.877 01:46:21 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:51.877 01:46:21 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:08:51.877 01:46:21 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:51.877 01:46:21 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:08:51.877 01:46:21 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:08:51.877 01:46:21 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:08:51.877 01:46:21 event.app_repeat -- event/event.sh@19 -- # repeat_pid=4034423 00:08:51.877 01:46:21 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:51.877 01:46:21 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:51.877 01:46:21 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 4034423' 00:08:51.877 Process app_repeat pid: 4034423 00:08:51.877 01:46:21 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:51.877 01:46:21 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:51.877 spdk_app_start Round 0 00:08:51.877 01:46:21 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4034423 /var/tmp/spdk-nbd.sock 00:08:51.877 01:46:21 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 4034423 ']' 00:08:51.877 01:46:21 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:51.877 01:46:21 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:51.877 01:46:21 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:51.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:51.877 01:46:21 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:51.877 01:46:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:51.877 [2024-10-09 01:46:21.505172] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 
00:08:51.877 [2024-10-09 01:46:21.505271] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4034423 ] 00:08:52.136 [2024-10-09 01:46:21.582204] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:52.136 [2024-10-09 01:46:21.634091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.136 [2024-10-09 01:46:21.634094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.136 01:46:21 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:52.136 01:46:21 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:52.136 01:46:21 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:52.396 Malloc0 00:08:52.396 01:46:21 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:52.656 Malloc1 00:08:52.656 01:46:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:52.656 01:46:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:52.656 01:46:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:52.656 01:46:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:52.656 01:46:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:52.656 01:46:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:52.656 01:46:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:52.656 01:46:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:52.656 01:46:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:52.656 01:46:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:52.656 01:46:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:52.656 01:46:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:52.656 01:46:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:52.656 01:46:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:52.656 01:46:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:52.656 01:46:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:52.916 /dev/nbd0 00:08:52.916 01:46:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:52.916 01:46:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:52.916 01:46:22 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:52.916 01:46:22 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:52.916 01:46:22 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:52.916 01:46:22 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:52.916 01:46:22 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 
/proc/partitions 00:08:52.916 01:46:22 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:52.916 01:46:22 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:52.916 01:46:22 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:52.916 01:46:22 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:52.916 1+0 records in 00:08:52.916 1+0 records out 00:08:52.916 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022889 s, 17.9 MB/s 00:08:52.916 01:46:22 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:08:52.916 01:46:22 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:52.916 01:46:22 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:08:52.916 01:46:22 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:52.916 01:46:22 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:52.916 01:46:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:52.916 01:46:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:52.916 01:46:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:53.175 /dev/nbd1 00:08:53.175 01:46:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:53.175 01:46:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:53.175 01:46:22 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:53.175 01:46:22 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:53.175 01:46:22 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:53.175 01:46:22 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:53.175 01:46:22 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:53.175 01:46:22 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:53.175 01:46:22 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:53.175 01:46:22 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:53.175 01:46:22 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:53.175 1+0 records in 00:08:53.175 1+0 records out 00:08:53.175 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023556 s, 17.4 MB/s 00:08:53.175 01:46:22 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:08:53.175 01:46:22 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:53.175 01:46:22 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:08:53.175 01:46:22 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:53.175 01:46:22 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:53.175 01:46:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:53.175 01:46:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
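The dd/cmp verification that follows (and repeats in each later round of app_repeat) is the same malloc-bdev/NBD round trip each time; condensed into a standalone sketch it looks like the block below. It assumes an SPDK app with the nbd library (here app_repeat) is serving RPCs on /var/tmp/spdk-nbd.sock, that /dev/nbd0 is free, and uses an illustrative temporary file path in place of the workspace paths in the trace.

  # Assumption: $SPDK_DIR is an SPDK checkout; a target is listening on /var/tmp/spdk-nbd.sock
  rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

  $rpc bdev_malloc_create 64 4096            # 64 MiB malloc bdev, 4 KiB blocks -> prints "Malloc0"
  $rpc nbd_start_disk Malloc0 /dev/nbd0      # export the bdev as a local NBD block device

  # Write known random data through the NBD device, then compare it against the source file
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
  dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0    # non-zero exit means the round trip corrupted data

  $rpc nbd_stop_disk /dev/nbd0               # detach the NBD device
  $rpc nbd_get_disks                         # should now report an empty list: []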
00:08:53.175 01:46:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:53.175 01:46:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:53.175 01:46:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:53.434 01:46:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:53.434 { 00:08:53.434 "nbd_device": "/dev/nbd0", 00:08:53.434 "bdev_name": "Malloc0" 00:08:53.434 }, 00:08:53.434 { 00:08:53.434 "nbd_device": "/dev/nbd1", 00:08:53.434 "bdev_name": "Malloc1" 00:08:53.434 } 00:08:53.434 ]' 00:08:53.434 01:46:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:53.434 { 00:08:53.434 "nbd_device": "/dev/nbd0", 00:08:53.434 "bdev_name": "Malloc0" 00:08:53.434 }, 00:08:53.434 { 00:08:53.434 "nbd_device": "/dev/nbd1", 00:08:53.434 "bdev_name": "Malloc1" 00:08:53.434 } 00:08:53.434 ]' 00:08:53.434 01:46:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:53.434 01:46:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:53.434 /dev/nbd1' 00:08:53.434 01:46:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:53.434 /dev/nbd1' 00:08:53.434 01:46:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:53.434 01:46:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:53.434 01:46:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:53.434 01:46:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:53.435 01:46:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:53.435 01:46:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:53.435 01:46:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:53.435 01:46:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:53.435 01:46:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:53.435 01:46:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:08:53.435 01:46:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:53.435 01:46:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:53.435 256+0 records in 00:08:53.435 256+0 records out 00:08:53.435 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00892874 s, 117 MB/s 00:08:53.435 01:46:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:53.435 01:46:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:53.435 256+0 records in 00:08:53.435 256+0 records out 00:08:53.435 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201573 s, 52.0 MB/s 00:08:53.435 01:46:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:53.435 01:46:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:53.435 256+0 records in 00:08:53.435 256+0 records out 00:08:53.435 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0216535 s, 48.4 
MB/s 00:08:53.435 01:46:22 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:53.435 01:46:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:53.435 01:46:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:53.435 01:46:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:53.435 01:46:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:08:53.435 01:46:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:53.435 01:46:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:53.435 01:46:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:53.435 01:46:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:53.435 01:46:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:53.435 01:46:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:53.435 01:46:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:08:53.435 01:46:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:53.435 01:46:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:53.435 01:46:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:53.435 01:46:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:53.435 01:46:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:53.435 01:46:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:53.435 01:46:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:53.694 01:46:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:53.694 01:46:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:53.694 01:46:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:53.694 01:46:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:53.694 01:46:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:53.694 01:46:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:53.694 01:46:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:53.694 01:46:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:53.694 01:46:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:53.694 01:46:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:53.953 01:46:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:53.953 01:46:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:53.953 01:46:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:53.953 01:46:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:53.953 01:46:23 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:53.953 01:46:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:53.953 01:46:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:53.953 01:46:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:53.953 01:46:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:53.953 01:46:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:53.953 01:46:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:54.212 01:46:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:54.212 01:46:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:54.212 01:46:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:54.212 01:46:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:54.212 01:46:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:54.212 01:46:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:54.212 01:46:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:54.212 01:46:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:54.212 01:46:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:54.212 01:46:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:54.212 01:46:23 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:54.212 01:46:23 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:54.212 01:46:23 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:54.471 01:46:23 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:54.471 [2024-10-09 01:46:24.096766] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:54.731 [2024-10-09 01:46:24.141793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.731 [2024-10-09 01:46:24.141795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.731 [2024-10-09 01:46:24.188393] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:54.731 [2024-10-09 01:46:24.188434] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:58.017 01:46:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:58.017 01:46:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:58.017 spdk_app_start Round 1 00:08:58.017 01:46:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4034423 /var/tmp/spdk-nbd.sock 00:08:58.017 01:46:26 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 4034423 ']' 00:08:58.017 01:46:26 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:58.017 01:46:26 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:58.017 01:46:26 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:58.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:08:58.017 01:46:26 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:58.017 01:46:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:58.017 01:46:27 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:58.017 01:46:27 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:58.017 01:46:27 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:58.017 Malloc0 00:08:58.017 01:46:27 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:58.017 Malloc1 00:08:58.017 01:46:27 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:58.017 01:46:27 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:58.017 01:46:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:58.017 01:46:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:58.017 01:46:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:58.017 01:46:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:58.017 01:46:27 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:58.017 01:46:27 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:58.017 01:46:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:58.017 01:46:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:58.017 01:46:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:58.017 01:46:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:58.017 01:46:27 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:58.017 01:46:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:58.017 01:46:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:58.017 01:46:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:58.277 /dev/nbd0 00:08:58.277 01:46:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:58.277 01:46:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:58.277 01:46:27 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:58.277 01:46:27 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:58.277 01:46:27 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:58.277 01:46:27 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:58.277 01:46:27 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:58.277 01:46:27 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:58.277 01:46:27 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:58.277 01:46:27 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:58.277 01:46:27 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:58.277 1+0 records in 00:08:58.277 1+0 records out 00:08:58.277 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250094 s, 16.4 MB/s 00:08:58.277 01:46:27 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:08:58.277 01:46:27 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:58.277 01:46:27 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:08:58.277 01:46:27 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:58.277 01:46:27 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:58.277 01:46:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:58.277 01:46:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:58.277 01:46:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:58.536 /dev/nbd1 00:08:58.536 01:46:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:58.536 01:46:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:58.536 01:46:28 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:58.536 01:46:28 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:58.536 01:46:28 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:58.536 01:46:28 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:58.536 01:46:28 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:58.536 01:46:28 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:58.536 01:46:28 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:58.536 01:46:28 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:58.536 01:46:28 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:58.536 1+0 records in 00:08:58.536 1+0 records out 00:08:58.536 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261951 s, 15.6 MB/s 00:08:58.536 01:46:28 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:08:58.536 01:46:28 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:58.536 01:46:28 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:08:58.536 01:46:28 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:58.536 01:46:28 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:58.536 01:46:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:58.536 01:46:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:58.536 01:46:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:58.536 01:46:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:58.536 01:46:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:08:58.795 01:46:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:58.795 { 00:08:58.795 "nbd_device": "/dev/nbd0", 00:08:58.795 "bdev_name": "Malloc0" 00:08:58.795 }, 00:08:58.795 { 00:08:58.795 "nbd_device": "/dev/nbd1", 00:08:58.795 "bdev_name": "Malloc1" 00:08:58.795 } 00:08:58.795 ]' 00:08:58.795 01:46:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:58.795 01:46:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:58.795 { 00:08:58.795 "nbd_device": "/dev/nbd0", 00:08:58.795 "bdev_name": "Malloc0" 00:08:58.795 }, 00:08:58.795 { 00:08:58.795 "nbd_device": "/dev/nbd1", 00:08:58.795 "bdev_name": "Malloc1" 00:08:58.795 } 00:08:58.795 ]' 00:08:58.795 01:46:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:58.795 /dev/nbd1' 00:08:58.795 01:46:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:58.795 /dev/nbd1' 00:08:58.795 01:46:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:58.795 01:46:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:58.795 01:46:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:58.795 01:46:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:58.795 01:46:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:58.795 01:46:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:58.795 01:46:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:58.795 01:46:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:58.795 01:46:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:58.795 01:46:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:08:58.795 01:46:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:58.795 01:46:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:58.795 256+0 records in 00:08:58.795 256+0 records out 00:08:58.795 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105595 s, 99.3 MB/s 00:08:58.795 01:46:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:58.795 01:46:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:58.795 256+0 records in 00:08:58.795 256+0 records out 00:08:58.795 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202795 s, 51.7 MB/s 00:08:58.795 01:46:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:58.795 01:46:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:58.795 256+0 records in 00:08:58.795 256+0 records out 00:08:58.795 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0218067 s, 48.1 MB/s 00:08:58.795 01:46:28 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:58.795 01:46:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:58.795 01:46:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:58.795 01:46:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:08:58.795 01:46:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:08:58.795 01:46:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:58.795 01:46:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:58.795 01:46:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:58.795 01:46:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:58.795 01:46:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:58.795 01:46:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:58.795 01:46:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:08:58.795 01:46:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:58.795 01:46:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:58.795 01:46:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:58.795 01:46:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:58.795 01:46:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:58.795 01:46:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:58.795 01:46:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:59.054 01:46:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:59.054 01:46:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:59.054 01:46:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:59.054 01:46:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:59.054 01:46:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:59.054 01:46:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:59.054 01:46:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:59.054 01:46:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:59.054 01:46:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:59.054 01:46:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:59.312 01:46:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:59.312 01:46:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:59.312 01:46:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:59.312 01:46:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:59.312 01:46:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:59.312 01:46:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:59.312 01:46:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:59.312 01:46:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:59.312 01:46:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:08:59.312 01:46:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:59.313 01:46:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:59.571 01:46:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:59.571 01:46:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:59.571 01:46:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:59.571 01:46:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:59.571 01:46:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:59.571 01:46:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:59.571 01:46:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:59.571 01:46:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:59.571 01:46:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:59.571 01:46:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:59.571 01:46:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:59.571 01:46:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:59.571 01:46:29 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:59.830 01:46:29 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:59.830 [2024-10-09 01:46:29.435195] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:59.830 [2024-10-09 01:46:29.479496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.830 [2024-10-09 01:46:29.479498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.089 [2024-10-09 01:46:29.527087] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:00.089 [2024-10-09 01:46:29.527141] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:02.623 01:46:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:02.623 01:46:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:02.623 spdk_app_start Round 2 00:09:02.623 01:46:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 4034423 /var/tmp/spdk-nbd.sock 00:09:02.623 01:46:32 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 4034423 ']' 00:09:02.623 01:46:32 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:02.623 01:46:32 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:02.623 01:46:32 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:02.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:09:02.623 01:46:32 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:02.623 01:46:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:02.881 01:46:32 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:02.881 01:46:32 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:09:02.881 01:46:32 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:03.140 Malloc0 00:09:03.140 01:46:32 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:03.398 Malloc1 00:09:03.398 01:46:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:03.398 01:46:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:03.398 01:46:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:03.398 01:46:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:03.398 01:46:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:03.398 01:46:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:03.398 01:46:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:03.398 01:46:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:03.398 01:46:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:03.398 01:46:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:03.398 01:46:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:03.398 01:46:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:03.398 01:46:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:03.398 01:46:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:03.398 01:46:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:03.398 01:46:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:03.398 /dev/nbd0 00:09:03.656 01:46:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:03.656 01:46:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:03.656 01:46:33 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:09:03.656 01:46:33 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:03.656 01:46:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:03.656 01:46:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:03.656 01:46:33 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:09:03.656 01:46:33 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:09:03.656 01:46:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:03.656 01:46:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:03.656 01:46:33 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:03.656 1+0 records in 00:09:03.656 1+0 records out 00:09:03.656 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212447 s, 19.3 MB/s 00:09:03.656 01:46:33 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:09:03.656 01:46:33 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:03.656 01:46:33 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:09:03.656 01:46:33 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:03.656 01:46:33 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:03.656 01:46:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:03.656 01:46:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:03.656 01:46:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:03.656 /dev/nbd1 00:09:03.914 01:46:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:03.914 01:46:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:03.914 01:46:33 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:09:03.914 01:46:33 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:03.914 01:46:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:03.914 01:46:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:03.914 01:46:33 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:09:03.914 01:46:33 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:09:03.914 01:46:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:03.914 01:46:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:03.914 01:46:33 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:03.914 1+0 records in 00:09:03.914 1+0 records out 00:09:03.914 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002551 s, 16.1 MB/s 00:09:03.914 01:46:33 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:09:03.914 01:46:33 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:03.914 01:46:33 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdtest 00:09:03.914 01:46:33 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:03.914 01:46:33 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:03.914 01:46:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:03.914 01:46:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:03.914 01:46:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:03.914 01:46:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:03.914 01:46:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:09:03.914 01:46:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:03.914 { 00:09:03.914 "nbd_device": "/dev/nbd0", 00:09:03.914 "bdev_name": "Malloc0" 00:09:03.914 }, 00:09:03.914 { 00:09:03.914 "nbd_device": "/dev/nbd1", 00:09:03.914 "bdev_name": "Malloc1" 00:09:03.914 } 00:09:03.914 ]' 00:09:03.914 01:46:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:03.914 { 00:09:03.914 "nbd_device": "/dev/nbd0", 00:09:03.914 "bdev_name": "Malloc0" 00:09:03.914 }, 00:09:03.914 { 00:09:03.914 "nbd_device": "/dev/nbd1", 00:09:03.914 "bdev_name": "Malloc1" 00:09:03.914 } 00:09:03.914 ]' 00:09:03.914 01:46:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:04.172 01:46:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:04.172 /dev/nbd1' 00:09:04.172 01:46:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:04.172 /dev/nbd1' 00:09:04.172 01:46:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:04.172 01:46:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:04.172 01:46:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:04.172 01:46:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:04.172 01:46:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:04.172 01:46:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:04.172 01:46:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:04.172 01:46:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:04.172 01:46:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:04.172 01:46:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:09:04.172 01:46:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:04.172 01:46:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:04.172 256+0 records in 00:09:04.172 256+0 records out 00:09:04.172 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114094 s, 91.9 MB/s 00:09:04.172 01:46:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:04.172 01:46:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:04.172 256+0 records in 00:09:04.172 256+0 records out 00:09:04.172 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202905 s, 51.7 MB/s 00:09:04.172 01:46:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:04.172 01:46:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:04.172 256+0 records in 00:09:04.172 256+0 records out 00:09:04.172 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0218124 s, 48.1 MB/s 00:09:04.172 01:46:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:04.172 01:46:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:04.172 01:46:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:04.172 01:46:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:09:04.172 01:46:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:09:04.172 01:46:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:04.172 01:46:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:04.172 01:46:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:04.172 01:46:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:04.172 01:46:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:04.172 01:46:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:04.172 01:46:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/nbdrandtest 00:09:04.172 01:46:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:04.172 01:46:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:04.172 01:46:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:04.172 01:46:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:04.172 01:46:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:04.172 01:46:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:04.172 01:46:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:04.430 01:46:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:04.430 01:46:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:04.430 01:46:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:04.430 01:46:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:04.430 01:46:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:04.430 01:46:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:04.430 01:46:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:04.431 01:46:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:04.431 01:46:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:04.431 01:46:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:04.689 01:46:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:04.689 01:46:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:04.689 01:46:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:04.689 01:46:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:04.689 01:46:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:04.689 01:46:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:04.689 01:46:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:04.689 01:46:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:04.689 01:46:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:09:04.689 01:46:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:04.689 01:46:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:04.689 01:46:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:04.689 01:46:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:04.689 01:46:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:04.947 01:46:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:04.947 01:46:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:04.947 01:46:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:04.947 01:46:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:04.947 01:46:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:04.947 01:46:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:04.947 01:46:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:04.947 01:46:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:04.947 01:46:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:04.947 01:46:34 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:04.947 01:46:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:05.205 [2024-10-09 01:46:34.744155] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:05.205 [2024-10-09 01:46:34.787861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.205 [2024-10-09 01:46:34.787863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.205 [2024-10-09 01:46:34.834346] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:05.205 [2024-10-09 01:46:34.834384] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:08.488 01:46:37 event.app_repeat -- event/event.sh@38 -- # waitforlisten 4034423 /var/tmp/spdk-nbd.sock 00:09:08.488 01:46:37 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 4034423 ']' 00:09:08.488 01:46:37 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:08.488 01:46:37 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:08.488 01:46:37 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:08.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
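The write/verify pass traced above reduces to a short pattern; a rough standalone sketch follows, with illustrative file names (the real test writes to its nbdrandtest file under the spdk tree) and the two NBD devices assumed to be already exported.

    # write: generate 1 MiB of random data and copy it onto every device
    tmp_file=/tmp/nbdrandtest            # stand-in for the test's temp file
    nbd_list=(/dev/nbd0 /dev/nbd1)
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done
    # verify: each device must read back identical to the reference file
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"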
00:09:08.488 01:46:37 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:08.488 01:46:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:08.488 01:46:37 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:08.488 01:46:37 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:09:08.488 01:46:37 event.app_repeat -- event/event.sh@39 -- # killprocess 4034423 00:09:08.488 01:46:37 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 4034423 ']' 00:09:08.488 01:46:37 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 4034423 00:09:08.488 01:46:37 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:09:08.488 01:46:37 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:08.488 01:46:37 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4034423 00:09:08.488 01:46:37 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:08.488 01:46:37 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:08.488 01:46:37 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4034423' 00:09:08.488 killing process with pid 4034423 00:09:08.488 01:46:37 event.app_repeat -- common/autotest_common.sh@969 -- # kill 4034423 00:09:08.488 01:46:37 event.app_repeat -- common/autotest_common.sh@974 -- # wait 4034423 00:09:08.488 spdk_app_start is called in Round 0. 00:09:08.488 Shutdown signal received, stop current app iteration 00:09:08.488 Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 reinitialization... 00:09:08.488 spdk_app_start is called in Round 1. 00:09:08.488 Shutdown signal received, stop current app iteration 00:09:08.488 Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 reinitialization... 00:09:08.488 spdk_app_start is called in Round 2. 00:09:08.488 Shutdown signal received, stop current app iteration 00:09:08.488 Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 reinitialization... 00:09:08.488 spdk_app_start is called in Round 3. 
00:09:08.488 Shutdown signal received, stop current app iteration 00:09:08.488 01:46:37 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:08.488 01:46:37 event.app_repeat -- event/event.sh@42 -- # return 0 00:09:08.488 00:09:08.488 real 0m16.508s 00:09:08.488 user 0m35.523s 00:09:08.488 sys 0m3.262s 00:09:08.488 01:46:37 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:08.488 01:46:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:08.488 ************************************ 00:09:08.488 END TEST app_repeat 00:09:08.488 ************************************ 00:09:08.488 01:46:38 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:08.488 01:46:38 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:09:08.488 01:46:38 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:08.488 01:46:38 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:08.488 01:46:38 event -- common/autotest_common.sh@10 -- # set +x 00:09:08.488 ************************************ 00:09:08.488 START TEST cpu_locks 00:09:08.488 ************************************ 00:09:08.488 01:46:38 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event/cpu_locks.sh 00:09:08.747 * Looking for test storage... 00:09:08.747 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/event 00:09:08.747 01:46:38 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:08.747 01:46:38 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:08.747 01:46:38 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:09:08.747 01:46:38 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:08.747 01:46:38 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:08.747 01:46:38 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:08.747 01:46:38 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:08.747 01:46:38 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:09:08.747 01:46:38 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:09:08.747 01:46:38 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:09:08.747 01:46:38 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:09:08.747 01:46:38 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:09:08.747 01:46:38 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:09:08.747 01:46:38 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:09:08.747 01:46:38 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:08.747 01:46:38 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:09:08.747 01:46:38 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:09:08.747 01:46:38 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:08.747 01:46:38 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:08.747 01:46:38 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:09:08.747 01:46:38 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:09:08.747 01:46:38 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:08.747 01:46:38 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:09:08.747 01:46:38 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:09:08.747 01:46:38 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:09:08.747 01:46:38 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:09:08.747 01:46:38 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:08.747 01:46:38 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:09:08.747 01:46:38 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:09:08.747 01:46:38 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:08.747 01:46:38 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:08.747 01:46:38 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:09:08.747 01:46:38 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:08.747 01:46:38 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:08.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.747 --rc genhtml_branch_coverage=1 00:09:08.747 --rc genhtml_function_coverage=1 00:09:08.747 --rc genhtml_legend=1 00:09:08.747 --rc geninfo_all_blocks=1 00:09:08.747 --rc geninfo_unexecuted_blocks=1 00:09:08.747 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:08.747 ' 00:09:08.747 01:46:38 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:08.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.747 --rc genhtml_branch_coverage=1 00:09:08.747 --rc genhtml_function_coverage=1 00:09:08.747 --rc genhtml_legend=1 00:09:08.747 --rc geninfo_all_blocks=1 00:09:08.747 --rc geninfo_unexecuted_blocks=1 00:09:08.747 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:08.747 ' 00:09:08.747 01:46:38 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:08.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.747 --rc genhtml_branch_coverage=1 00:09:08.747 --rc genhtml_function_coverage=1 00:09:08.747 --rc genhtml_legend=1 00:09:08.747 --rc geninfo_all_blocks=1 00:09:08.747 --rc geninfo_unexecuted_blocks=1 00:09:08.747 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:08.747 ' 00:09:08.747 01:46:38 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:08.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.747 --rc genhtml_branch_coverage=1 00:09:08.747 --rc genhtml_function_coverage=1 00:09:08.747 --rc genhtml_legend=1 00:09:08.747 --rc geninfo_all_blocks=1 00:09:08.747 --rc geninfo_unexecuted_blocks=1 00:09:08.747 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:08.747 ' 00:09:08.747 01:46:38 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:08.747 01:46:38 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:08.747 01:46:38 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:08.747 01:46:38 event.cpu_locks -- 
event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:08.747 01:46:38 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:08.747 01:46:38 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:08.747 01:46:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:08.747 ************************************ 00:09:08.747 START TEST default_locks 00:09:08.747 ************************************ 00:09:08.747 01:46:38 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:09:08.747 01:46:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=4036813 00:09:08.747 01:46:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 4036813 00:09:08.747 01:46:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:08.747 01:46:38 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 4036813 ']' 00:09:08.747 01:46:38 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.747 01:46:38 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:08.747 01:46:38 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.747 01:46:38 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:08.747 01:46:38 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:08.747 [2024-10-09 01:46:38.332774] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 
00:09:08.747 [2024-10-09 01:46:38.332852] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4036813 ] 00:09:08.747 [2024-10-09 01:46:38.407705] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.006 [2024-10-09 01:46:38.454917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.264 01:46:38 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:09.264 01:46:38 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:09:09.264 01:46:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 4036813 00:09:09.264 01:46:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 4036813 00:09:09.264 01:46:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:09.831 lslocks: write error 00:09:09.831 01:46:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 4036813 00:09:09.831 01:46:39 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 4036813 ']' 00:09:09.832 01:46:39 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 4036813 00:09:09.832 01:46:39 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:09:09.832 01:46:39 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:09.832 01:46:39 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4036813 00:09:09.832 01:46:39 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:09.832 01:46:39 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:09.832 01:46:39 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4036813' 00:09:09.832 killing process with pid 4036813 00:09:09.832 01:46:39 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 4036813 00:09:09.832 01:46:39 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 4036813 00:09:10.091 01:46:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 4036813 00:09:10.091 01:46:39 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:09:10.091 01:46:39 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 4036813 00:09:10.091 01:46:39 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:09:10.091 01:46:39 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:10.091 01:46:39 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:09:10.091 01:46:39 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:10.091 01:46:39 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 4036813 00:09:10.091 01:46:39 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 4036813 ']' 00:09:10.091 01:46:39 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.091 01:46:39 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:09:10.091 01:46:39 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.091 01:46:39 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:10.091 01:46:39 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:10.091 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (4036813) - No such process 00:09:10.091 ERROR: process (pid: 4036813) is no longer running 00:09:10.091 01:46:39 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:10.091 01:46:39 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:09:10.091 01:46:39 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:09:10.091 01:46:39 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:10.091 01:46:39 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:10.091 01:46:39 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:10.091 01:46:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:09:10.091 01:46:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:10.091 01:46:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:09:10.091 01:46:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:10.091 00:09:10.091 real 0m1.424s 00:09:10.091 user 0m1.391s 00:09:10.091 sys 0m0.715s 00:09:10.091 01:46:39 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:10.091 01:46:39 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:10.091 ************************************ 00:09:10.091 END TEST default_locks 00:09:10.091 ************************************ 00:09:10.350 01:46:39 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:10.350 01:46:39 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:10.350 01:46:39 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:10.350 01:46:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:10.350 ************************************ 00:09:10.350 START TEST default_locks_via_rpc 00:09:10.350 ************************************ 00:09:10.350 01:46:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:09:10.350 01:46:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=4037114 00:09:10.350 01:46:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 4037114 00:09:10.350 01:46:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:10.350 01:46:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 4037114 ']' 00:09:10.350 01:46:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.350 01:46:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 
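The default_locks run that ended above hinges on one check; a minimal sketch of it, under the assumption that "$pid" is whatever spdk_tgt instance the test launched (the spdk_cpu_lock name comes straight from the trace):

    # succeeds while the target holds its per-core lock file
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }
    locks_exist "$pid"    # expected to pass while spdk_tgt -m 0x1 is running
    kill "$pid"           # afterwards, waiting on the pid again must fail,
                          # which the test asserts through its NOT wrapper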
00:09:10.350 01:46:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.350 01:46:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:10.350 01:46:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.350 [2024-10-09 01:46:39.841324] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 00:09:10.350 [2024-10-09 01:46:39.841391] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4037114 ] 00:09:10.350 [2024-10-09 01:46:39.914665] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.350 [2024-10-09 01:46:39.958574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.611 01:46:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:10.611 01:46:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:10.611 01:46:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:10.611 01:46:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.611 01:46:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.611 01:46:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.611 01:46:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:09:10.611 01:46:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:10.611 01:46:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:09:10.611 01:46:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:10.611 01:46:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:10.611 01:46:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.611 01:46:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.611 01:46:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.611 01:46:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 4037114 00:09:10.611 01:46:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 4037114 00:09:10.611 01:46:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:10.869 01:46:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 4037114 00:09:10.869 01:46:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 4037114 ']' 00:09:10.869 01:46:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 4037114 00:09:11.128 01:46:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:09:11.128 01:46:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- 
# '[' Linux = Linux ']' 00:09:11.128 01:46:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4037114 00:09:11.128 01:46:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:11.128 01:46:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:11.128 01:46:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4037114' 00:09:11.128 killing process with pid 4037114 00:09:11.128 01:46:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 4037114 00:09:11.128 01:46:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 4037114 00:09:11.387 00:09:11.387 real 0m1.081s 00:09:11.387 user 0m1.033s 00:09:11.387 sys 0m0.532s 00:09:11.387 01:46:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:11.387 01:46:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.387 ************************************ 00:09:11.387 END TEST default_locks_via_rpc 00:09:11.387 ************************************ 00:09:11.387 01:46:40 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:11.387 01:46:40 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:11.387 01:46:40 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:11.387 01:46:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:11.387 ************************************ 00:09:11.387 START TEST non_locking_app_on_locked_coremask 00:09:11.387 ************************************ 00:09:11.387 01:46:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:09:11.387 01:46:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=4037238 00:09:11.387 01:46:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 4037238 /var/tmp/spdk.sock 00:09:11.387 01:46:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:11.387 01:46:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 4037238 ']' 00:09:11.387 01:46:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.387 01:46:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:11.387 01:46:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.387 01:46:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:11.387 01:46:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:11.387 [2024-10-09 01:46:41.009903] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 
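The default_locks_via_rpc case that just finished toggles the same locks over JSON-RPC instead of process lifetime; a minimal sketch, with the rpc.py path abbreviated and the default /var/tmp/spdk.sock socket assumed:

    rpc=./scripts/rpc.py
    $rpc framework_disable_cpumask_locks   # drop the per-core lock files
    $rpc framework_enable_cpumask_locks    # take them again; lslocks -p <pid>
                                           # should list spdk_cpu_lock once more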
00:09:11.387 [2024-10-09 01:46:41.009968] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4037238 ] 00:09:11.646 [2024-10-09 01:46:41.086143] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.646 [2024-10-09 01:46:41.131040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.905 01:46:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:11.905 01:46:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:09:11.905 01:46:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:11.905 01:46:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=4037343 00:09:11.905 01:46:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 4037343 /var/tmp/spdk2.sock 00:09:11.905 01:46:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 4037343 ']' 00:09:11.905 01:46:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:11.905 01:46:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:11.905 01:46:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:11.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:11.905 01:46:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:11.905 01:46:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:11.905 [2024-10-09 01:46:41.371831] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 00:09:11.905 [2024-10-09 01:46:41.371927] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4037343 ] 00:09:11.905 [2024-10-09 01:46:41.468012] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:11.905 [2024-10-09 01:46:41.468045] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.905 [2024-10-09 01:46:41.556964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.842 01:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:12.842 01:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:09:12.842 01:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 4037238 00:09:12.842 01:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4037238 00:09:12.842 01:46:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:13.778 lslocks: write error 00:09:13.778 01:46:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 4037238 00:09:13.778 01:46:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 4037238 ']' 00:09:13.778 01:46:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 4037238 00:09:13.778 01:46:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:09:13.778 01:46:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:13.778 01:46:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4037238 00:09:13.778 01:46:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:13.778 01:46:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:13.778 01:46:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4037238' 00:09:13.778 killing process with pid 4037238 00:09:13.778 01:46:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 4037238 00:09:13.778 01:46:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 4037238 00:09:14.715 01:46:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 4037343 00:09:14.715 01:46:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 4037343 ']' 00:09:14.715 01:46:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 4037343 00:09:14.715 01:46:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:09:14.715 01:46:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:14.715 01:46:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4037343 00:09:14.715 01:46:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:14.715 01:46:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:14.715 01:46:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4037343' 00:09:14.715 
killing process with pid 4037343 00:09:14.715 01:46:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 4037343 00:09:14.715 01:46:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 4037343 00:09:14.974 00:09:14.974 real 0m3.419s 00:09:14.974 user 0m3.540s 00:09:14.974 sys 0m1.270s 00:09:14.974 01:46:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:14.974 01:46:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:14.974 ************************************ 00:09:14.974 END TEST non_locking_app_on_locked_coremask 00:09:14.974 ************************************ 00:09:14.974 01:46:44 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:14.974 01:46:44 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:14.974 01:46:44 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:14.974 01:46:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:14.974 ************************************ 00:09:14.974 START TEST locking_app_on_unlocked_coremask 00:09:14.974 ************************************ 00:09:14.974 01:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:09:14.974 01:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=4037737 00:09:14.974 01:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 4037737 /var/tmp/spdk.sock 00:09:14.974 01:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 4037737 ']' 00:09:14.974 01:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.974 01:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:14.974 01:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.974 01:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:14.974 01:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:14.974 01:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:14.974 [2024-10-09 01:46:44.497518] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 00:09:14.974 [2024-10-09 01:46:44.497576] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4037737 ] 00:09:14.974 [2024-10-09 01:46:44.569772] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
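A sketch of the scenario covered by the non_locking_app_on_locked_coremask test that ended above: a second target on the same core mask is allowed when it opts out of cpumask locking. Paths mirror the trace; process handling is simplified.

    spdk_tgt=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt
    "$spdk_tgt" -m 0x1 &                                                  # holds the core 0 lock
    "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # same mask, locking disabled, still starts
    # only the first instance shows spdk_cpu_lock entries in lslocks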
00:09:14.974 [2024-10-09 01:46:44.569804] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.974 [2024-10-09 01:46:44.618489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.235 01:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:15.235 01:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:09:15.235 01:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=4037740 00:09:15.235 01:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 4037740 /var/tmp/spdk2.sock 00:09:15.235 01:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 4037740 ']' 00:09:15.235 01:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:15.235 01:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:15.235 01:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:15.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:15.235 01:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:15.235 01:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:15.235 01:46:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:15.235 [2024-10-09 01:46:44.873112] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 
00:09:15.235 [2024-10-09 01:46:44.873186] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4037740 ] 00:09:15.494 [2024-10-09 01:46:44.977674] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.494 [2024-10-09 01:46:45.076505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.158 01:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:16.158 01:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:09:16.158 01:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 4037740 00:09:16.158 01:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4037740 00:09:16.158 01:46:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:17.095 lslocks: write error 00:09:17.096 01:46:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 4037737 00:09:17.096 01:46:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 4037737 ']' 00:09:17.096 01:46:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 4037737 00:09:17.096 01:46:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:09:17.096 01:46:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:17.096 01:46:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4037737 00:09:17.096 01:46:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:17.096 01:46:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:17.096 01:46:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4037737' 00:09:17.096 killing process with pid 4037737 00:09:17.096 01:46:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 4037737 00:09:17.096 01:46:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 4037737 00:09:17.665 01:46:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 4037740 00:09:17.665 01:46:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 4037740 ']' 00:09:17.665 01:46:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 4037740 00:09:17.665 01:46:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:09:17.665 01:46:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:17.665 01:46:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4037740 00:09:17.665 01:46:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:17.665 01:46:47 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:17.665 01:46:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4037740' 00:09:17.665 killing process with pid 4037740 00:09:17.665 01:46:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 4037740 00:09:17.665 01:46:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 4037740 00:09:18.233 00:09:18.233 real 0m3.169s 00:09:18.233 user 0m3.269s 00:09:18.233 sys 0m1.175s 00:09:18.233 01:46:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:18.233 01:46:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:18.233 ************************************ 00:09:18.233 END TEST locking_app_on_unlocked_coremask 00:09:18.233 ************************************ 00:09:18.233 01:46:47 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:09:18.233 01:46:47 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:18.233 01:46:47 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:18.233 01:46:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:18.233 ************************************ 00:09:18.233 START TEST locking_app_on_locked_coremask 00:09:18.233 ************************************ 00:09:18.233 01:46:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:09:18.234 01:46:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=4038201 00:09:18.234 01:46:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:18.234 01:46:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 4038201 /var/tmp/spdk.sock 00:09:18.234 01:46:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 4038201 ']' 00:09:18.234 01:46:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.234 01:46:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:18.234 01:46:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.234 01:46:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:18.234 01:46:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:18.234 [2024-10-09 01:46:47.719105] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 
00:09:18.234 [2024-10-09 01:46:47.719148] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4038201 ] 00:09:18.234 [2024-10-09 01:46:47.791432] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.234 [2024-10-09 01:46:47.842179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.492 01:46:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:18.492 01:46:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:09:18.492 01:46:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=4038304 00:09:18.492 01:46:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 4038304 /var/tmp/spdk2.sock 00:09:18.492 01:46:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:09:18.492 01:46:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 4038304 /var/tmp/spdk2.sock 00:09:18.492 01:46:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:09:18.492 01:46:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:18.492 01:46:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:18.493 01:46:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:09:18.493 01:46:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:18.493 01:46:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 4038304 /var/tmp/spdk2.sock 00:09:18.493 01:46:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 4038304 ']' 00:09:18.493 01:46:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:18.493 01:46:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:18.493 01:46:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:18.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:18.493 01:46:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:18.493 01:46:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:18.493 [2024-10-09 01:46:48.095508] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 
00:09:18.493 [2024-10-09 01:46:48.095583] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4038304 ] 00:09:18.751 [2024-10-09 01:46:48.197367] app.c: 780:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 4038201 has claimed it. 00:09:18.751 [2024-10-09 01:46:48.197409] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:19.320 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (4038304) - No such process 00:09:19.320 ERROR: process (pid: 4038304) is no longer running 00:09:19.320 01:46:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:19.320 01:46:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:09:19.320 01:46:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:09:19.320 01:46:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:19.320 01:46:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:19.320 01:46:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:19.320 01:46:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 4038201 00:09:19.320 01:46:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 4038201 00:09:19.320 01:46:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:19.889 lslocks: write error 00:09:19.889 01:46:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 4038201 00:09:19.889 01:46:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 4038201 ']' 00:09:19.889 01:46:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 4038201 00:09:19.889 01:46:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:09:19.889 01:46:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:19.889 01:46:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4038201 00:09:19.889 01:46:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:19.889 01:46:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:19.889 01:46:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4038201' 00:09:19.889 killing process with pid 4038201 00:09:19.889 01:46:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 4038201 00:09:19.889 01:46:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 4038201 00:09:20.148 00:09:20.148 real 0m1.940s 00:09:20.148 user 0m2.057s 00:09:20.148 sys 0m0.683s 00:09:20.148 01:46:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:09:20.148 01:46:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:20.148 ************************************ 00:09:20.148 END TEST locking_app_on_locked_coremask 00:09:20.148 ************************************ 00:09:20.148 01:46:49 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:09:20.148 01:46:49 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:20.148 01:46:49 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:20.148 01:46:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:20.148 ************************************ 00:09:20.148 START TEST locking_overlapped_coremask 00:09:20.148 ************************************ 00:09:20.148 01:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:09:20.148 01:46:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=4038512 00:09:20.148 01:46:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 4038512 /var/tmp/spdk.sock 00:09:20.148 01:46:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:09:20.148 01:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 4038512 ']' 00:09:20.148 01:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.148 01:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:20.148 01:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.148 01:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:20.148 01:46:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:20.148 [2024-10-09 01:46:49.755114] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 
00:09:20.148 [2024-10-09 01:46:49.755179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4038512 ] 00:09:20.407 [2024-10-09 01:46:49.829614] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:20.407 [2024-10-09 01:46:49.881734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.407 [2024-10-09 01:46:49.881843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:20.407 [2024-10-09 01:46:49.881846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.667 01:46:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:20.667 01:46:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:09:20.667 01:46:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=4038540 00:09:20.667 01:46:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 4038540 /var/tmp/spdk2.sock 00:09:20.667 01:46:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:20.667 01:46:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:09:20.667 01:46:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 4038540 /var/tmp/spdk2.sock 00:09:20.667 01:46:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:09:20.667 01:46:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:20.667 01:46:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:09:20.667 01:46:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:20.667 01:46:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 4038540 /var/tmp/spdk2.sock 00:09:20.667 01:46:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 4038540 ']' 00:09:20.667 01:46:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:20.667 01:46:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:20.667 01:46:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:20.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:20.667 01:46:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:20.667 01:46:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:20.667 [2024-10-09 01:46:50.128159] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 
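The -m 0x1c mask requested here overlaps the -m 0x7 mask the first target already holds: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so the claim on core 2 just below is expected to fail. A quick way to see the overlap (a hypothetical helper, not part of cpu_locks.sh):

    # List the cores that two hex coremasks both request.
    overlap_cores() {
        local a=$(( $1 )) b=$(( $2 )) i
        for (( i = 0; i < 64; i++ )); do
            (( ((a >> i) & 1) && ((b >> i) & 1) )) && echo "$i"
        done
    }
    overlap_cores 0x7 0x1c    # prints 2, the contended core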
00:09:20.667 [2024-10-09 01:46:50.128223] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4038540 ] 00:09:20.667 [2024-10-09 01:46:50.226378] app.c: 780:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 4038512 has claimed it. 00:09:20.667 [2024-10-09 01:46:50.226422] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:21.236 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (4038540) - No such process 00:09:21.236 ERROR: process (pid: 4038540) is no longer running 00:09:21.236 01:46:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:21.236 01:46:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:09:21.236 01:46:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:09:21.236 01:46:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:21.236 01:46:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:21.236 01:46:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:21.236 01:46:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:21.236 01:46:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:21.236 01:46:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:21.236 01:46:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:21.236 01:46:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 4038512 00:09:21.236 01:46:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 4038512 ']' 00:09:21.236 01:46:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 4038512 00:09:21.236 01:46:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:09:21.236 01:46:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:21.236 01:46:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4038512 00:09:21.236 01:46:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:21.236 01:46:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:21.236 01:46:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4038512' 00:09:21.236 killing process with pid 4038512 00:09:21.236 01:46:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 4038512 00:09:21.236 01:46:50 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 4038512 00:09:21.805 00:09:21.805 real 0m1.477s 00:09:21.805 user 0m4.061s 00:09:21.805 sys 0m0.451s 00:09:21.805 01:46:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:21.805 01:46:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:21.805 ************************************ 00:09:21.805 END TEST locking_overlapped_coremask 00:09:21.805 ************************************ 00:09:21.805 01:46:51 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:09:21.805 01:46:51 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:21.805 01:46:51 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:21.805 01:46:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:21.805 ************************************ 00:09:21.805 START TEST locking_overlapped_coremask_via_rpc 00:09:21.805 ************************************ 00:09:21.805 01:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:09:21.805 01:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=4038742 00:09:21.805 01:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 4038742 /var/tmp/spdk.sock 00:09:21.805 01:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:21.805 01:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 4038742 ']' 00:09:21.805 01:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.805 01:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:21.805 01:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.805 01:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:21.805 01:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:21.805 [2024-10-09 01:46:51.315131] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 00:09:21.805 [2024-10-09 01:46:51.315214] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4038742 ] 00:09:21.805 [2024-10-09 01:46:51.389376] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
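This test repeats the overlap scenario, but both targets are started with --disable-cpumask-locks (hence the "CPU core locks deactivated" notice above), so both can come up on overlapping masks; locking is then re-enabled over RPC. Condensed from the cpu_locks.sh lines 147 and 151 shown in the trace (workspace paths as used in this run; the real test waits for each socket with waitforlisten rather than backgrounding blindly):

    # Both targets start cleanly despite the 0x7 / 0x1c overlap,
    # because no per-core lock files are taken at startup.
    spdk_tgt=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt
    "$spdk_tgt" -m 0x7 --disable-cpumask-locks &
    "$spdk_tgt" -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &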
00:09:21.805 [2024-10-09 01:46:51.389412] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:21.805 [2024-10-09 01:46:51.439830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.805 [2024-10-09 01:46:51.439890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:21.805 [2024-10-09 01:46:51.439893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.065 01:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:22.065 01:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:22.065 01:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=4038792 00:09:22.065 01:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 4038792 /var/tmp/spdk2.sock 00:09:22.065 01:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:22.065 01:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 4038792 ']' 00:09:22.065 01:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:22.065 01:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:22.065 01:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:22.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:22.065 01:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:22.065 01:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.065 [2024-10-09 01:46:51.682108] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 00:09:22.065 [2024-10-09 01:46:51.682192] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4038792 ] 00:09:22.325 [2024-10-09 01:46:51.790346] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:22.325 [2024-10-09 01:46:51.790380] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:22.325 [2024-10-09 01:46:51.887052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:22.325 [2024-10-09 01:46:51.887158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:22.325 [2024-10-09 01:46:51.887160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:23.263 01:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:23.263 01:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:23.263 01:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:23.263 01:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.263 01:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:23.263 01:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.263 01:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:23.263 01:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:09:23.263 01:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:23.263 01:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:23.263 01:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:23.263 01:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:23.263 01:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:23.263 01:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:23.263 01:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.263 01:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:23.263 [2024-10-09 01:46:52.589881] app.c: 780:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 4038742 has claimed it. 
00:09:23.263 request: 00:09:23.263 { 00:09:23.263 "method": "framework_enable_cpumask_locks", 00:09:23.263 "req_id": 1 00:09:23.263 } 00:09:23.263 Got JSON-RPC error response 00:09:23.263 response: 00:09:23.263 { 00:09:23.263 "code": -32603, 00:09:23.263 "message": "Failed to claim CPU core: 2" 00:09:23.263 } 00:09:23.263 01:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:23.263 01:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:09:23.263 01:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:23.263 01:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:23.263 01:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:23.263 01:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 4038742 /var/tmp/spdk.sock 00:09:23.263 01:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 4038742 ']' 00:09:23.263 01:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.263 01:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:23.263 01:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.263 01:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:23.263 01:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:23.263 01:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:23.263 01:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:23.263 01:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 4038792 /var/tmp/spdk2.sock 00:09:23.263 01:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 4038792 ']' 00:09:23.263 01:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:23.263 01:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:23.263 01:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:23.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
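The request/response pair above is the rpc_cmd wrapper at work: enabling the locks on the first target (default /var/tmp/spdk.sock) succeeds and claims cores 0-2, while the same call against the second target fails with -32603 because core 2 is already taken. Invoked by hand it would look roughly like this (rpc.py path as used elsewhere in this run):

    rpc=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
    # First target (default socket): claims cores 0-2 and succeeds.
    "$rpc" framework_enable_cpumask_locks
    # Second target: expected to fail with JSON-RPC -32603
    # "Failed to claim CPU core: 2" while the first target holds core 2.
    "$rpc" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks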
00:09:23.263 01:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:23.263 01:46:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:23.523 01:46:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:23.523 01:46:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:23.523 01:46:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:23.523 01:46:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:23.523 01:46:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:23.523 01:46:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:23.523 00:09:23.523 real 0m1.729s 00:09:23.523 user 0m0.835s 00:09:23.523 sys 0m0.156s 00:09:23.523 01:46:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:23.523 01:46:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:23.523 ************************************ 00:09:23.523 END TEST locking_overlapped_coremask_via_rpc 00:09:23.523 ************************************ 00:09:23.523 01:46:53 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:09:23.523 01:46:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 4038742 ]] 00:09:23.523 01:46:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 4038742 00:09:23.523 01:46:53 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 4038742 ']' 00:09:23.523 01:46:53 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 4038742 00:09:23.523 01:46:53 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:09:23.523 01:46:53 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:23.523 01:46:53 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4038742 00:09:23.523 01:46:53 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:23.523 01:46:53 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:23.523 01:46:53 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4038742' 00:09:23.523 killing process with pid 4038742 00:09:23.523 01:46:53 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 4038742 00:09:23.523 01:46:53 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 4038742 00:09:24.100 01:46:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 4038792 ]] 00:09:24.100 01:46:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 4038792 00:09:24.100 01:46:53 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 4038792 ']' 00:09:24.100 01:46:53 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 4038792 00:09:24.100 01:46:53 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:09:24.100 01:46:53 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:09:24.100 01:46:53 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4038792 00:09:24.100 01:46:53 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:09:24.100 01:46:53 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:09:24.100 01:46:53 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4038792' 00:09:24.100 killing process with pid 4038792 00:09:24.100 01:46:53 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 4038792 00:09:24.100 01:46:53 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 4038792 00:09:24.359 01:46:53 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:24.359 01:46:53 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:09:24.359 01:46:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 4038742 ]] 00:09:24.359 01:46:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 4038742 00:09:24.359 01:46:53 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 4038742 ']' 00:09:24.359 01:46:53 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 4038742 00:09:24.359 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (4038742) - No such process 00:09:24.359 01:46:53 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 4038742 is not found' 00:09:24.359 Process with pid 4038742 is not found 00:09:24.359 01:46:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 4038792 ]] 00:09:24.359 01:46:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 4038792 00:09:24.359 01:46:53 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 4038792 ']' 00:09:24.359 01:46:53 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 4038792 00:09:24.359 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (4038792) - No such process 00:09:24.359 01:46:53 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 4038792 is not found' 00:09:24.359 Process with pid 4038792 is not found 00:09:24.359 01:46:53 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:24.359 00:09:24.359 real 0m15.797s 00:09:24.359 user 0m26.404s 00:09:24.359 sys 0m6.092s 00:09:24.359 01:46:53 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:24.359 01:46:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:24.359 ************************************ 00:09:24.359 END TEST cpu_locks 00:09:24.359 ************************************ 00:09:24.359 00:09:24.359 real 0m41.003s 00:09:24.359 user 1m16.356s 00:09:24.359 sys 0m10.485s 00:09:24.359 01:46:53 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:24.359 01:46:53 event -- common/autotest_common.sh@10 -- # set +x 00:09:24.359 ************************************ 00:09:24.359 END TEST event 00:09:24.359 ************************************ 00:09:24.359 01:46:53 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:09:24.359 01:46:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:24.359 01:46:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:24.359 01:46:53 -- common/autotest_common.sh@10 -- # set +x 00:09:24.359 ************************************ 00:09:24.359 START TEST thread 00:09:24.359 ************************************ 00:09:24.359 01:46:53 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/thread.sh 00:09:24.619 * Looking for test storage... 00:09:24.619 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread 00:09:24.619 01:46:54 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:24.619 01:46:54 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:09:24.619 01:46:54 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:24.619 01:46:54 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:24.619 01:46:54 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:24.619 01:46:54 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:24.619 01:46:54 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:24.619 01:46:54 thread -- scripts/common.sh@336 -- # IFS=.-: 00:09:24.619 01:46:54 thread -- scripts/common.sh@336 -- # read -ra ver1 00:09:24.619 01:46:54 thread -- scripts/common.sh@337 -- # IFS=.-: 00:09:24.619 01:46:54 thread -- scripts/common.sh@337 -- # read -ra ver2 00:09:24.619 01:46:54 thread -- scripts/common.sh@338 -- # local 'op=<' 00:09:24.619 01:46:54 thread -- scripts/common.sh@340 -- # ver1_l=2 00:09:24.619 01:46:54 thread -- scripts/common.sh@341 -- # ver2_l=1 00:09:24.619 01:46:54 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:24.619 01:46:54 thread -- scripts/common.sh@344 -- # case "$op" in 00:09:24.619 01:46:54 thread -- scripts/common.sh@345 -- # : 1 00:09:24.619 01:46:54 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:24.619 01:46:54 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:24.619 01:46:54 thread -- scripts/common.sh@365 -- # decimal 1 00:09:24.619 01:46:54 thread -- scripts/common.sh@353 -- # local d=1 00:09:24.619 01:46:54 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:24.619 01:46:54 thread -- scripts/common.sh@355 -- # echo 1 00:09:24.619 01:46:54 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:09:24.619 01:46:54 thread -- scripts/common.sh@366 -- # decimal 2 00:09:24.619 01:46:54 thread -- scripts/common.sh@353 -- # local d=2 00:09:24.619 01:46:54 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:24.619 01:46:54 thread -- scripts/common.sh@355 -- # echo 2 00:09:24.619 01:46:54 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:09:24.619 01:46:54 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:24.619 01:46:54 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:24.619 01:46:54 thread -- scripts/common.sh@368 -- # return 0 00:09:24.619 01:46:54 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:24.619 01:46:54 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:24.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.619 --rc genhtml_branch_coverage=1 00:09:24.619 --rc genhtml_function_coverage=1 00:09:24.619 --rc genhtml_legend=1 00:09:24.619 --rc geninfo_all_blocks=1 00:09:24.619 --rc geninfo_unexecuted_blocks=1 00:09:24.619 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:24.619 ' 00:09:24.619 01:46:54 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:24.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.619 --rc genhtml_branch_coverage=1 00:09:24.619 --rc genhtml_function_coverage=1 00:09:24.619 --rc genhtml_legend=1 
00:09:24.619 --rc geninfo_all_blocks=1 00:09:24.619 --rc geninfo_unexecuted_blocks=1 00:09:24.619 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:24.619 ' 00:09:24.619 01:46:54 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:24.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.619 --rc genhtml_branch_coverage=1 00:09:24.619 --rc genhtml_function_coverage=1 00:09:24.619 --rc genhtml_legend=1 00:09:24.619 --rc geninfo_all_blocks=1 00:09:24.619 --rc geninfo_unexecuted_blocks=1 00:09:24.619 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:24.619 ' 00:09:24.619 01:46:54 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:24.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.619 --rc genhtml_branch_coverage=1 00:09:24.619 --rc genhtml_function_coverage=1 00:09:24.619 --rc genhtml_legend=1 00:09:24.619 --rc geninfo_all_blocks=1 00:09:24.619 --rc geninfo_unexecuted_blocks=1 00:09:24.619 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:24.619 ' 00:09:24.619 01:46:54 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:24.619 01:46:54 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:09:24.619 01:46:54 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:24.620 01:46:54 thread -- common/autotest_common.sh@10 -- # set +x 00:09:24.620 ************************************ 00:09:24.620 START TEST thread_poller_perf 00:09:24.620 ************************************ 00:09:24.620 01:46:54 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:24.620 [2024-10-09 01:46:54.217921] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 00:09:24.620 [2024-10-09 01:46:54.218015] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4039211 ] 00:09:24.879 [2024-10-09 01:46:54.293627] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.879 [2024-10-09 01:46:54.339289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.879 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:09:25.815 [2024-10-08T23:46:55.482Z] ====================================== 00:09:25.815 [2024-10-08T23:46:55.482Z] busy:2304216100 (cyc) 00:09:25.815 [2024-10-08T23:46:55.482Z] total_run_count: 846000 00:09:25.815 [2024-10-08T23:46:55.482Z] tsc_hz: 2300000000 (cyc) 00:09:25.815 [2024-10-08T23:46:55.482Z] ====================================== 00:09:25.815 [2024-10-08T23:46:55.482Z] poller_cost: 2723 (cyc), 1183 (nsec) 00:09:25.815 00:09:25.815 real 0m1.181s 00:09:25.815 user 0m1.093s 00:09:25.815 sys 0m0.083s 00:09:25.815 01:46:55 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:25.815 01:46:55 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:25.815 ************************************ 00:09:25.815 END TEST thread_poller_perf 00:09:25.815 ************************************ 00:09:25.815 01:46:55 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:25.815 01:46:55 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:09:25.815 01:46:55 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:25.815 01:46:55 thread -- common/autotest_common.sh@10 -- # set +x 00:09:25.815 ************************************ 00:09:25.815 START TEST thread_poller_perf 00:09:25.815 ************************************ 00:09:25.815 01:46:55 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:25.815 [2024-10-09 01:46:55.465228] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 00:09:25.815 [2024-10-09 01:46:55.465273] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4039405 ] 00:09:26.074 [2024-10-09 01:46:55.533370] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.075 [2024-10-09 01:46:55.578328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.075 Running 1000 pollers for 1 seconds with 0 microseconds period. 
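The poller_cost figure above is simply the busy cycle count divided by the number of poller invocations, converted to nanoseconds with the reported TSC rate. Reproducing the 1-microsecond-period run's numbers (a check on the arithmetic, not code from poller_perf itself):

    busy=2304216100 runs=846000 tsc_hz=2300000000
    echo $(( busy / runs ))                          # 2723 cyc per call
    echo $(( busy / runs * 1000000000 / tsc_hz ))    # ~1183 nsec per call

The same formula gives the 173 cyc / 75 nsec reported for the 0-microsecond-period run that follows.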
00:09:27.012 [2024-10-08T23:46:56.679Z] ====================================== 00:09:27.012 [2024-10-08T23:46:56.679Z] busy:2301268348 (cyc) 00:09:27.012 [2024-10-08T23:46:56.679Z] total_run_count: 13298000 00:09:27.012 [2024-10-08T23:46:56.679Z] tsc_hz: 2300000000 (cyc) 00:09:27.012 [2024-10-08T23:46:56.679Z] ====================================== 00:09:27.012 [2024-10-08T23:46:56.679Z] poller_cost: 173 (cyc), 75 (nsec) 00:09:27.012 00:09:27.012 real 0m1.160s 00:09:27.012 user 0m1.083s 00:09:27.012 sys 0m0.073s 00:09:27.012 01:46:56 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:27.012 01:46:56 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:27.012 ************************************ 00:09:27.012 END TEST thread_poller_perf 00:09:27.012 ************************************ 00:09:27.012 01:46:56 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:09:27.012 01:46:56 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:09:27.012 01:46:56 thread -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:27.012 01:46:56 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:27.012 01:46:56 thread -- common/autotest_common.sh@10 -- # set +x 00:09:27.271 ************************************ 00:09:27.271 START TEST thread_spdk_lock 00:09:27.271 ************************************ 00:09:27.271 01:46:56 thread.thread_spdk_lock -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock 00:09:27.271 [2024-10-09 01:46:56.701512] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 00:09:27.271 [2024-10-09 01:46:56.701557] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4039597 ] 00:09:27.271 [2024-10-09 01:46:56.771443] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:27.271 [2024-10-09 01:46:56.818274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.271 [2024-10-09 01:46:56.818277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.839 [2024-10-09 01:46:57.313982] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 980:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:27.839 [2024-10-09 01:46:57.314018] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3099:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:09:27.839 [2024-10-09 01:46:57.314029] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:3054:sspin_stacks_print: *ERROR*: spinlock 0x14c6580 00:09:27.839 [2024-10-09 01:46:57.314788] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 875:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:27.839 [2024-10-09 01:46:57.314894] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c:1041:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:27.839 [2024-10-09 
01:46:57.314916] /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/thread/thread.c: 875:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:27.839 Starting test contend 00:09:27.839 Worker Delay Wait us Hold us Total us 00:09:27.839 0 3 172141 187298 359439 00:09:27.839 1 5 88231 289301 377532 00:09:27.839 PASS test contend 00:09:27.839 Starting test hold_by_poller 00:09:27.839 PASS test hold_by_poller 00:09:27.839 Starting test hold_by_message 00:09:27.839 PASS test hold_by_message 00:09:27.839 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/thread/lock/spdk_lock summary: 00:09:27.839 100014 assertions passed 00:09:27.839 0 assertions failed 00:09:27.839 00:09:27.839 real 0m0.661s 00:09:27.839 user 0m1.075s 00:09:27.839 sys 0m0.079s 00:09:27.839 01:46:57 thread.thread_spdk_lock -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:27.839 01:46:57 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:09:27.839 ************************************ 00:09:27.839 END TEST thread_spdk_lock 00:09:27.839 ************************************ 00:09:27.839 00:09:27.839 real 0m3.409s 00:09:27.839 user 0m3.442s 00:09:27.839 sys 0m0.475s 00:09:27.839 01:46:57 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:27.839 01:46:57 thread -- common/autotest_common.sh@10 -- # set +x 00:09:27.839 ************************************ 00:09:27.839 END TEST thread 00:09:27.839 ************************************ 00:09:27.839 01:46:57 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:09:27.839 01:46:57 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:09:27.839 01:46:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:27.839 01:46:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:27.839 01:46:57 -- common/autotest_common.sh@10 -- # set +x 00:09:27.839 ************************************ 00:09:27.839 START TEST app_cmdline 00:09:27.839 ************************************ 00:09:27.839 01:46:57 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/cmdline.sh 00:09:28.099 * Looking for test storage... 
00:09:28.099 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:09:28.099 01:46:57 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:28.099 01:46:57 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:09:28.099 01:46:57 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:28.099 01:46:57 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:28.099 01:46:57 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:28.099 01:46:57 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:28.099 01:46:57 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:28.099 01:46:57 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:09:28.099 01:46:57 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:09:28.099 01:46:57 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:09:28.099 01:46:57 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:09:28.099 01:46:57 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:09:28.099 01:46:57 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:09:28.099 01:46:57 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:09:28.099 01:46:57 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:28.099 01:46:57 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:09:28.099 01:46:57 app_cmdline -- scripts/common.sh@345 -- # : 1 00:09:28.099 01:46:57 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:28.099 01:46:57 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:28.099 01:46:57 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:09:28.099 01:46:57 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:09:28.099 01:46:57 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:28.099 01:46:57 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:09:28.099 01:46:57 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:09:28.099 01:46:57 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:09:28.099 01:46:57 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:09:28.099 01:46:57 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:28.099 01:46:57 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:09:28.100 01:46:57 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:09:28.100 01:46:57 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:28.100 01:46:57 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:28.100 01:46:57 app_cmdline -- scripts/common.sh@368 -- # return 0 00:09:28.100 01:46:57 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:28.100 01:46:57 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:28.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.100 --rc genhtml_branch_coverage=1 00:09:28.100 --rc genhtml_function_coverage=1 00:09:28.100 --rc genhtml_legend=1 00:09:28.100 --rc geninfo_all_blocks=1 00:09:28.100 --rc geninfo_unexecuted_blocks=1 00:09:28.100 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:28.100 ' 00:09:28.100 01:46:57 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:28.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.100 --rc genhtml_branch_coverage=1 00:09:28.100 --rc genhtml_function_coverage=1 00:09:28.100 --rc 
genhtml_legend=1 00:09:28.100 --rc geninfo_all_blocks=1 00:09:28.100 --rc geninfo_unexecuted_blocks=1 00:09:28.100 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:28.100 ' 00:09:28.100 01:46:57 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:28.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.100 --rc genhtml_branch_coverage=1 00:09:28.100 --rc genhtml_function_coverage=1 00:09:28.100 --rc genhtml_legend=1 00:09:28.100 --rc geninfo_all_blocks=1 00:09:28.100 --rc geninfo_unexecuted_blocks=1 00:09:28.100 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:28.100 ' 00:09:28.100 01:46:57 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:28.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.100 --rc genhtml_branch_coverage=1 00:09:28.100 --rc genhtml_function_coverage=1 00:09:28.100 --rc genhtml_legend=1 00:09:28.100 --rc geninfo_all_blocks=1 00:09:28.100 --rc geninfo_unexecuted_blocks=1 00:09:28.100 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:28.100 ' 00:09:28.100 01:46:57 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:28.100 01:46:57 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=4039833 00:09:28.100 01:46:57 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 4039833 00:09:28.100 01:46:57 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 4039833 ']' 00:09:28.100 01:46:57 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.100 01:46:57 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:28.100 01:46:57 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.100 01:46:57 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:28.100 01:46:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:28.100 01:46:57 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:28.100 [2024-10-09 01:46:57.684791] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 
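This target is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are reachable over the socket; anything else is rejected, which is exactly what the env_dpdk_get_mem_stats call later in the test demonstrates. Queried by hand (rpc.py path as used in this run):

    rpc=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py
    "$rpc" spdk_get_version          # allowed: returns the version JSON shown below
    "$rpc" env_dpdk_get_mem_stats    # not on the allow-list: JSON-RPC -32601 "Method not found"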
00:09:28.100 [2024-10-09 01:46:57.684869] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4039833 ] 00:09:28.100 [2024-10-09 01:46:57.758570] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.359 [2024-10-09 01:46:57.809245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.619 01:46:58 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:28.619 01:46:58 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:09:28.619 01:46:58 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:09:28.619 { 00:09:28.619 "version": "SPDK v25.01-pre git sha1 3164389d2", 00:09:28.619 "fields": { 00:09:28.619 "major": 25, 00:09:28.619 "minor": 1, 00:09:28.619 "patch": 0, 00:09:28.619 "suffix": "-pre", 00:09:28.619 "commit": "3164389d2" 00:09:28.619 } 00:09:28.619 } 00:09:28.619 01:46:58 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:28.619 01:46:58 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:28.619 01:46:58 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:28.619 01:46:58 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:28.619 01:46:58 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:28.619 01:46:58 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.619 01:46:58 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:28.619 01:46:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:28.619 01:46:58 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:28.619 01:46:58 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.619 01:46:58 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:28.619 01:46:58 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:28.619 01:46:58 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:28.619 01:46:58 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:09:28.619 01:46:58 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:28.619 01:46:58 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:09:28.619 01:46:58 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:28.619 01:46:58 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:09:28.619 01:46:58 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:28.619 01:46:58 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:09:28.619 01:46:58 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:28.619 01:46:58 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py 00:09:28.619 01:46:58 app_cmdline -- 
common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py ]] 00:09:28.619 01:46:58 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:28.878 request: 00:09:28.878 { 00:09:28.878 "method": "env_dpdk_get_mem_stats", 00:09:28.878 "req_id": 1 00:09:28.878 } 00:09:28.878 Got JSON-RPC error response 00:09:28.879 response: 00:09:28.879 { 00:09:28.879 "code": -32601, 00:09:28.879 "message": "Method not found" 00:09:28.879 } 00:09:28.879 01:46:58 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:09:28.879 01:46:58 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:28.879 01:46:58 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:28.879 01:46:58 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:28.879 01:46:58 app_cmdline -- app/cmdline.sh@1 -- # killprocess 4039833 00:09:28.879 01:46:58 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 4039833 ']' 00:09:28.879 01:46:58 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 4039833 00:09:28.879 01:46:58 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:09:28.879 01:46:58 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:28.879 01:46:58 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 4039833 00:09:28.879 01:46:58 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:28.879 01:46:58 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:28.879 01:46:58 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 4039833' 00:09:28.879 killing process with pid 4039833 00:09:28.879 01:46:58 app_cmdline -- common/autotest_common.sh@969 -- # kill 4039833 00:09:28.879 01:46:58 app_cmdline -- common/autotest_common.sh@974 -- # wait 4039833 00:09:29.447 00:09:29.447 real 0m1.354s 00:09:29.447 user 0m1.509s 00:09:29.447 sys 0m0.525s 00:09:29.447 01:46:58 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:29.447 01:46:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:29.447 ************************************ 00:09:29.447 END TEST app_cmdline 00:09:29.447 ************************************ 00:09:29.447 01:46:58 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:09:29.447 01:46:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:29.448 01:46:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:29.448 01:46:58 -- common/autotest_common.sh@10 -- # set +x 00:09:29.448 ************************************ 00:09:29.448 START TEST version 00:09:29.448 ************************************ 00:09:29.448 01:46:58 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/version.sh 00:09:29.448 * Looking for test storage... 
00:09:29.448 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:09:29.448 01:46:58 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:29.448 01:46:58 version -- common/autotest_common.sh@1681 -- # lcov --version 00:09:29.448 01:46:58 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:29.448 01:46:59 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:29.448 01:46:59 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:29.448 01:46:59 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:29.448 01:46:59 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:29.448 01:46:59 version -- scripts/common.sh@336 -- # IFS=.-: 00:09:29.448 01:46:59 version -- scripts/common.sh@336 -- # read -ra ver1 00:09:29.448 01:46:59 version -- scripts/common.sh@337 -- # IFS=.-: 00:09:29.448 01:46:59 version -- scripts/common.sh@337 -- # read -ra ver2 00:09:29.448 01:46:59 version -- scripts/common.sh@338 -- # local 'op=<' 00:09:29.448 01:46:59 version -- scripts/common.sh@340 -- # ver1_l=2 00:09:29.448 01:46:59 version -- scripts/common.sh@341 -- # ver2_l=1 00:09:29.448 01:46:59 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:29.448 01:46:59 version -- scripts/common.sh@344 -- # case "$op" in 00:09:29.448 01:46:59 version -- scripts/common.sh@345 -- # : 1 00:09:29.448 01:46:59 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:29.448 01:46:59 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:29.448 01:46:59 version -- scripts/common.sh@365 -- # decimal 1 00:09:29.448 01:46:59 version -- scripts/common.sh@353 -- # local d=1 00:09:29.448 01:46:59 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:29.448 01:46:59 version -- scripts/common.sh@355 -- # echo 1 00:09:29.448 01:46:59 version -- scripts/common.sh@365 -- # ver1[v]=1 00:09:29.448 01:46:59 version -- scripts/common.sh@366 -- # decimal 2 00:09:29.448 01:46:59 version -- scripts/common.sh@353 -- # local d=2 00:09:29.448 01:46:59 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:29.448 01:46:59 version -- scripts/common.sh@355 -- # echo 2 00:09:29.448 01:46:59 version -- scripts/common.sh@366 -- # ver2[v]=2 00:09:29.448 01:46:59 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:29.448 01:46:59 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:29.448 01:46:59 version -- scripts/common.sh@368 -- # return 0 00:09:29.448 01:46:59 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:29.448 01:46:59 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:29.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.448 --rc genhtml_branch_coverage=1 00:09:29.448 --rc genhtml_function_coverage=1 00:09:29.448 --rc genhtml_legend=1 00:09:29.448 --rc geninfo_all_blocks=1 00:09:29.448 --rc geninfo_unexecuted_blocks=1 00:09:29.448 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:29.448 ' 00:09:29.448 01:46:59 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:29.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.448 --rc genhtml_branch_coverage=1 00:09:29.448 --rc genhtml_function_coverage=1 00:09:29.448 --rc genhtml_legend=1 00:09:29.448 --rc geninfo_all_blocks=1 00:09:29.448 --rc geninfo_unexecuted_blocks=1 00:09:29.448 --gcov-tool 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:29.448 ' 00:09:29.448 01:46:59 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:29.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.448 --rc genhtml_branch_coverage=1 00:09:29.448 --rc genhtml_function_coverage=1 00:09:29.448 --rc genhtml_legend=1 00:09:29.448 --rc geninfo_all_blocks=1 00:09:29.448 --rc geninfo_unexecuted_blocks=1 00:09:29.448 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:29.448 ' 00:09:29.448 01:46:59 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:29.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.448 --rc genhtml_branch_coverage=1 00:09:29.448 --rc genhtml_function_coverage=1 00:09:29.448 --rc genhtml_legend=1 00:09:29.448 --rc geninfo_all_blocks=1 00:09:29.448 --rc geninfo_unexecuted_blocks=1 00:09:29.448 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:29.448 ' 00:09:29.448 01:46:59 version -- app/version.sh@17 -- # get_header_version major 00:09:29.448 01:46:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:09:29.448 01:46:59 version -- app/version.sh@14 -- # cut -f2 00:09:29.448 01:46:59 version -- app/version.sh@14 -- # tr -d '"' 00:09:29.448 01:46:59 version -- app/version.sh@17 -- # major=25 00:09:29.448 01:46:59 version -- app/version.sh@18 -- # get_header_version minor 00:09:29.448 01:46:59 version -- app/version.sh@14 -- # tr -d '"' 00:09:29.448 01:46:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:09:29.448 01:46:59 version -- app/version.sh@14 -- # cut -f2 00:09:29.448 01:46:59 version -- app/version.sh@18 -- # minor=1 00:09:29.448 01:46:59 version -- app/version.sh@19 -- # get_header_version patch 00:09:29.448 01:46:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:09:29.448 01:46:59 version -- app/version.sh@14 -- # cut -f2 00:09:29.448 01:46:59 version -- app/version.sh@14 -- # tr -d '"' 00:09:29.448 01:46:59 version -- app/version.sh@19 -- # patch=0 00:09:29.448 01:46:59 version -- app/version.sh@20 -- # get_header_version suffix 00:09:29.448 01:46:59 version -- app/version.sh@14 -- # cut -f2 00:09:29.448 01:46:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/version.h 00:09:29.448 01:46:59 version -- app/version.sh@14 -- # tr -d '"' 00:09:29.708 01:46:59 version -- app/version.sh@20 -- # suffix=-pre 00:09:29.708 01:46:59 version -- app/version.sh@22 -- # version=25.1 00:09:29.708 01:46:59 version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:29.708 01:46:59 version -- app/version.sh@28 -- # version=25.1rc0 00:09:29.708 01:46:59 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:09:29.708 01:46:59 version -- app/version.sh@30 -- # 
python3 -c 'import spdk; print(spdk.__version__)' 00:09:29.708 01:46:59 version -- app/version.sh@30 -- # py_version=25.1rc0 00:09:29.708 01:46:59 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:09:29.708 00:09:29.708 real 0m0.257s 00:09:29.708 user 0m0.129s 00:09:29.708 sys 0m0.167s 00:09:29.708 01:46:59 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:29.708 01:46:59 version -- common/autotest_common.sh@10 -- # set +x 00:09:29.708 ************************************ 00:09:29.708 END TEST version 00:09:29.708 ************************************ 00:09:29.708 01:46:59 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:09:29.708 01:46:59 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:09:29.708 01:46:59 -- spdk/autotest.sh@194 -- # uname -s 00:09:29.708 01:46:59 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:09:29.708 01:46:59 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:29.708 01:46:59 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:29.708 01:46:59 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:09:29.708 01:46:59 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:09:29.708 01:46:59 -- spdk/autotest.sh@256 -- # timing_exit lib 00:09:29.708 01:46:59 -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:29.708 01:46:59 -- common/autotest_common.sh@10 -- # set +x 00:09:29.708 01:46:59 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:09:29.708 01:46:59 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:09:29.708 01:46:59 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:09:29.708 01:46:59 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:09:29.708 01:46:59 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:09:29.708 01:46:59 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:09:29.708 01:46:59 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:09:29.708 01:46:59 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:09:29.708 01:46:59 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:09:29.708 01:46:59 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:09:29.708 01:46:59 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:09:29.708 01:46:59 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:09:29.708 01:46:59 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:09:29.708 01:46:59 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:09:29.708 01:46:59 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:09:29.708 01:46:59 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:09:29.708 01:46:59 -- spdk/autotest.sh@370 -- # [[ 1 -eq 1 ]] 00:09:29.708 01:46:59 -- spdk/autotest.sh@371 -- # run_test llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:09:29.708 01:46:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:29.708 01:46:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:29.708 01:46:59 -- common/autotest_common.sh@10 -- # set +x 00:09:29.708 ************************************ 00:09:29.708 START TEST llvm_fuzz 00:09:29.708 ************************************ 00:09:29.708 01:46:59 llvm_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm.sh 00:09:29.708 * Looking for test storage... 
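The version test traced above derives the release string twice, once by grepping include/spdk/version.h and once from the Python bindings, and asserts the two agree (25.1rc0 in this run). A condensed Bash sketch of that flow, assuming the SPDK checkout is the current directory and its python/ subdirectory holds the bindings; the real app/version.sh is the authoritative version, this only mirrors the grep/cut/tr pipeline visible in the trace:

  # Hedged sketch of the version cross-check from app/version.sh.
  get_header_version() {
      grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h |
          cut -f2 | tr -d '"'
  }
  major=$(get_header_version MAJOR)     # 25 in this run
  minor=$(get_header_version MINOR)     # 1
  patch=$(get_header_version PATCH)     # 0
  suffix=$(get_header_version SUFFIX)   # -pre in this run
  version="$major.$minor"
  (( patch != 0 )) && version+=".$patch"
  [[ -n $suffix ]] && version+="rc0"    # -pre is reported as rc0, as in the trace
  py_version=$(PYTHONPATH=python python3 -c 'import spdk; print(spdk.__version__)')
  [[ $py_version == "$version" ]] && echo "version check passed: $version"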
00:09:29.708 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz 00:09:29.708 01:46:59 llvm_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:29.708 01:46:59 llvm_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:09:29.708 01:46:59 llvm_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:29.968 01:46:59 llvm_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:29.968 01:46:59 llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:29.968 01:46:59 llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:29.968 01:46:59 llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:29.968 01:46:59 llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:09:29.968 01:46:59 llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:09:29.968 01:46:59 llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:09:29.968 01:46:59 llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:09:29.968 01:46:59 llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:09:29.968 01:46:59 llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:09:29.968 01:46:59 llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:09:29.968 01:46:59 llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:29.968 01:46:59 llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:09:29.968 01:46:59 llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:09:29.968 01:46:59 llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:29.968 01:46:59 llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:29.968 01:46:59 llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:09:29.968 01:46:59 llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:09:29.968 01:46:59 llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:29.968 01:46:59 llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:09:29.968 01:46:59 llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:09:29.968 01:46:59 llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:09:29.968 01:46:59 llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:09:29.968 01:46:59 llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:29.968 01:46:59 llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:09:29.968 01:46:59 llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:09:29.968 01:46:59 llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:29.968 01:46:59 llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:29.968 01:46:59 llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:09:29.968 01:46:59 llvm_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:29.968 01:46:59 llvm_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:29.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.968 --rc genhtml_branch_coverage=1 00:09:29.968 --rc genhtml_function_coverage=1 00:09:29.968 --rc genhtml_legend=1 00:09:29.968 --rc geninfo_all_blocks=1 00:09:29.968 --rc geninfo_unexecuted_blocks=1 00:09:29.968 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:29.968 ' 00:09:29.968 01:46:59 llvm_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:29.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.968 --rc genhtml_branch_coverage=1 00:09:29.968 --rc genhtml_function_coverage=1 00:09:29.968 --rc genhtml_legend=1 00:09:29.968 --rc geninfo_all_blocks=1 00:09:29.968 --rc 
geninfo_unexecuted_blocks=1 00:09:29.968 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:29.968 ' 00:09:29.968 01:46:59 llvm_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:29.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.968 --rc genhtml_branch_coverage=1 00:09:29.968 --rc genhtml_function_coverage=1 00:09:29.968 --rc genhtml_legend=1 00:09:29.968 --rc geninfo_all_blocks=1 00:09:29.968 --rc geninfo_unexecuted_blocks=1 00:09:29.968 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:29.968 ' 00:09:29.968 01:46:59 llvm_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:29.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:29.968 --rc genhtml_branch_coverage=1 00:09:29.968 --rc genhtml_function_coverage=1 00:09:29.968 --rc genhtml_legend=1 00:09:29.968 --rc geninfo_all_blocks=1 00:09:29.968 --rc geninfo_unexecuted_blocks=1 00:09:29.968 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:29.968 ' 00:09:29.968 01:46:59 llvm_fuzz -- fuzz/llvm.sh@11 -- # fuzzers=($(get_fuzzer_targets)) 00:09:29.968 01:46:59 llvm_fuzz -- fuzz/llvm.sh@11 -- # get_fuzzer_targets 00:09:29.968 01:46:59 llvm_fuzz -- common/autotest_common.sh@548 -- # fuzzers=() 00:09:29.968 01:46:59 llvm_fuzz -- common/autotest_common.sh@548 -- # local fuzzers 00:09:29.968 01:46:59 llvm_fuzz -- common/autotest_common.sh@550 -- # [[ -n '' ]] 00:09:29.968 01:46:59 llvm_fuzz -- common/autotest_common.sh@553 -- # fuzzers=("$rootdir/test/fuzz/llvm/"*) 00:09:29.968 01:46:59 llvm_fuzz -- common/autotest_common.sh@554 -- # fuzzers=("${fuzzers[@]##*/}") 00:09:29.968 01:46:59 llvm_fuzz -- common/autotest_common.sh@557 -- # echo 'common.sh llvm-gcov.sh nvmf vfio' 00:09:29.968 01:46:59 llvm_fuzz -- fuzz/llvm.sh@13 -- # llvm_out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:09:29.968 01:46:59 llvm_fuzz -- fuzz/llvm.sh@15 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm 00:09:29.968 01:46:59 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:09:29.968 01:46:59 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:09:29.968 01:46:59 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:09:29.968 01:46:59 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:09:29.968 01:46:59 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:09:29.968 01:46:59 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:09:29.968 01:46:59 llvm_fuzz -- fuzz/llvm.sh@19 -- # run_test nvmf_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:09:29.968 01:46:59 llvm_fuzz -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:29.968 01:46:59 llvm_fuzz -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:29.968 01:46:59 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:09:29.968 ************************************ 00:09:29.968 START TEST nvmf_llvm_fuzz 00:09:29.968 ************************************ 00:09:29.968 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/run.sh 00:09:29.968 * Looking for test storage... 
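The llvm.sh target selection traced just above expands test/fuzz/llvm/* into basenames (common.sh llvm-gcov.sh nvmf vfio) and dispatches run_test only for the directory-backed fuzzer targets. A simplified Bash sketch of that loop, assuming $rootdir points at the SPDK checkout; the real script also honors a preselected target list and wraps each target in run_test with coverage options:

  # Hedged sketch of the fuzzer-target loop from test/fuzz/llvm.sh.
  rootdir=${rootdir:-$PWD}
  fuzzers=("$rootdir"/test/fuzz/llvm/*)      # expands to common.sh llvm-gcov.sh nvmf vfio
  fuzzers=("${fuzzers[@]##*/}")              # keep basenames only
  for fuzzer in "${fuzzers[@]}"; do
      case "$fuzzer" in
          nvmf | vfio)                       # only directory-backed targets are run
              echo "would run: $rootdir/test/fuzz/llvm/$fuzzer/run.sh"
              ;;
          *) ;;                              # helper scripts are skipped
      esac
  done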
00:09:29.968 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:09:29.968 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:29.968 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:09:29.968 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:30.230 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:30.230 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:30.230 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:30.230 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:30.230 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:09:30.230 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:09:30.230 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:09:30.230 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:09:30.230 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:09:30.230 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:09:30.230 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:09:30.230 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:30.230 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:09:30.230 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:09:30.230 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:30.230 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:30.230 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:09:30.230 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:09:30.230 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:30.230 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:09:30.230 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:09:30.230 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:09:30.230 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:09:30.230 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:30.230 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:09:30.230 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:09:30.230 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:30.230 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:30.230 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:09:30.230 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:30.230 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:30.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.230 --rc genhtml_branch_coverage=1 00:09:30.230 --rc genhtml_function_coverage=1 00:09:30.230 --rc genhtml_legend=1 00:09:30.230 --rc geninfo_all_blocks=1 00:09:30.230 --rc geninfo_unexecuted_blocks=1 00:09:30.230 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:30.230 ' 00:09:30.230 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:30.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.230 --rc genhtml_branch_coverage=1 00:09:30.230 --rc genhtml_function_coverage=1 00:09:30.230 --rc genhtml_legend=1 00:09:30.230 --rc geninfo_all_blocks=1 00:09:30.230 --rc geninfo_unexecuted_blocks=1 00:09:30.230 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:30.230 ' 00:09:30.230 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:30.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.230 --rc genhtml_branch_coverage=1 00:09:30.230 --rc genhtml_function_coverage=1 00:09:30.230 --rc genhtml_legend=1 00:09:30.230 --rc geninfo_all_blocks=1 00:09:30.231 --rc geninfo_unexecuted_blocks=1 00:09:30.231 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:30.231 ' 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:30.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.231 --rc genhtml_branch_coverage=1 00:09:30.231 --rc genhtml_function_coverage=1 00:09:30.231 --rc genhtml_legend=1 00:09:30.231 --rc geninfo_all_blocks=1 00:09:30.231 --rc geninfo_unexecuted_blocks=1 00:09:30.231 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:30.231 ' 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@60 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz 
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@24 -- 
# CONFIG_OCF_PATH= 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER=y 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@58 -- # 
CONFIG_HAVE_EVP_MAC=y 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_SHARED=n 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_FC=n 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_URING=n 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # 
readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:30.231 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:30.232 #define SPDK_CONFIG_H 00:09:30.232 #define SPDK_CONFIG_AIO_FSDEV 1 00:09:30.232 #define SPDK_CONFIG_APPS 1 00:09:30.232 #define SPDK_CONFIG_ARCH native 00:09:30.232 #undef SPDK_CONFIG_ASAN 00:09:30.232 #undef SPDK_CONFIG_AVAHI 00:09:30.232 #undef SPDK_CONFIG_CET 00:09:30.232 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:09:30.232 #define SPDK_CONFIG_COVERAGE 1 00:09:30.232 #define SPDK_CONFIG_CROSS_PREFIX 00:09:30.232 #undef SPDK_CONFIG_CRYPTO 00:09:30.232 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:30.232 #undef SPDK_CONFIG_CUSTOMOCF 00:09:30.232 #undef SPDK_CONFIG_DAOS 00:09:30.232 #define SPDK_CONFIG_DAOS_DIR 00:09:30.232 #define SPDK_CONFIG_DEBUG 1 00:09:30.232 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:30.232 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:09:30.232 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:30.232 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:30.232 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:30.232 #undef SPDK_CONFIG_DPDK_UADK 00:09:30.232 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:09:30.232 #define SPDK_CONFIG_EXAMPLES 1 00:09:30.232 #undef SPDK_CONFIG_FC 00:09:30.232 #define SPDK_CONFIG_FC_PATH 00:09:30.232 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:30.232 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:30.232 #define SPDK_CONFIG_FSDEV 1 00:09:30.232 #undef SPDK_CONFIG_FUSE 00:09:30.232 #define SPDK_CONFIG_FUZZER 1 00:09:30.232 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:09:30.232 #undef SPDK_CONFIG_GOLANG 00:09:30.232 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:30.232 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:30.232 #define 
SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:30.232 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:30.232 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:30.232 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:30.232 #undef SPDK_CONFIG_HAVE_LZ4 00:09:30.232 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:09:30.232 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:09:30.232 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:30.232 #define SPDK_CONFIG_IDXD 1 00:09:30.232 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:30.232 #undef SPDK_CONFIG_IPSEC_MB 00:09:30.232 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:30.232 #define SPDK_CONFIG_ISAL 1 00:09:30.232 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:30.232 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:30.232 #define SPDK_CONFIG_LIBDIR 00:09:30.232 #undef SPDK_CONFIG_LTO 00:09:30.232 #define SPDK_CONFIG_MAX_LCORES 128 00:09:30.232 #define SPDK_CONFIG_NVME_CUSE 1 00:09:30.232 #undef SPDK_CONFIG_OCF 00:09:30.232 #define SPDK_CONFIG_OCF_PATH 00:09:30.232 #define SPDK_CONFIG_OPENSSL_PATH 00:09:30.232 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:30.232 #define SPDK_CONFIG_PGO_DIR 00:09:30.232 #undef SPDK_CONFIG_PGO_USE 00:09:30.232 #define SPDK_CONFIG_PREFIX /usr/local 00:09:30.232 #undef SPDK_CONFIG_RAID5F 00:09:30.232 #undef SPDK_CONFIG_RBD 00:09:30.232 #define SPDK_CONFIG_RDMA 1 00:09:30.232 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:30.232 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:30.232 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:30.232 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:30.232 #undef SPDK_CONFIG_SHARED 00:09:30.232 #undef SPDK_CONFIG_SMA 00:09:30.232 #define SPDK_CONFIG_TESTS 1 00:09:30.232 #undef SPDK_CONFIG_TSAN 00:09:30.232 #define SPDK_CONFIG_UBLK 1 00:09:30.232 #define SPDK_CONFIG_UBSAN 1 00:09:30.232 #undef SPDK_CONFIG_UNIT_TESTS 00:09:30.232 #undef SPDK_CONFIG_URING 00:09:30.232 #define SPDK_CONFIG_URING_PATH 00:09:30.232 #undef SPDK_CONFIG_URING_ZNS 00:09:30.232 #undef SPDK_CONFIG_USDT 00:09:30.232 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:30.232 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:30.232 #define SPDK_CONFIG_VFIO_USER 1 00:09:30.232 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:30.232 #define SPDK_CONFIG_VHOST 1 00:09:30.232 #define SPDK_CONFIG_VIRTIO 1 00:09:30.232 #undef SPDK_CONFIG_VTUNE 00:09:30.232 #define SPDK_CONFIG_VTUNE_DIR 00:09:30.232 #define SPDK_CONFIG_WERROR 1 00:09:30.232 #define SPDK_CONFIG_WPDK_DIR 00:09:30.232 #undef SPDK_CONFIG_XNVME 00:09:30.232 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@68 -- # uname -s 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:30.232 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:09:30.233 01:46:59 
llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@181 -- # 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@182 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:30.233 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@193 -- # 
PYTHONDONTWRITEBYTECODE=1 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@204 -- # cat 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@260 -- # 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV= 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ 1 -eq 1 ]] 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV=1 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@273 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@277 -- # export valgrind= 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@277 -- # valgrind= 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@283 -- # uname -s 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@287 -- # MAKE=make 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j72 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@307 -- # TEST_MODE= 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@329 -- # [[ -z 4040193 ]] 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@329 -- # kill -0 4040193 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@342 -- # local mount target_dir 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@344 -- # local -A mounts fss sizes 
avails uses 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.m5GSb0 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf /tmp/spdk.m5GSb0/tests/nvmf /tmp/spdk.m5GSb0 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # df -T 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=722997248 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=4561432576 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=86313201664 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=94500294656 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=8187092992 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:30.234 
01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=47246716928 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=47250145280 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=3428352 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=18894155776 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=18900062208 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=5906432 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=47249555456 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=47250149376 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=593920 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:30.234 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:09:30.235 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:09:30.235 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=9450016768 00:09:30.235 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=9450029056 00:09:30.235 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:09:30.235 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:30.235 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:09:30.235 * Looking for test storage... 
00:09:30.235 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@379 -- # local target_space new_size 00:09:30.235 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:09:30.235 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:30.235 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:09:30.235 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@383 -- # mount=/ 00:09:30.235 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@385 -- # target_space=86313201664 00:09:30.235 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:09:30.235 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:09:30.235 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:09:30.235 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:09:30.235 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:09:30.235 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@392 -- # new_size=10401685504 00:09:30.235 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:30.235 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:09:30.235 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:09:30.235 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:09:30.235 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf 00:09:30.235 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@400 -- # return 0 00:09:30.235 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1668 -- # set -o errtrace 00:09:30.235 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:09:30.235 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:30.235 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1672 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:30.235 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1673 -- # true 00:09:30.235 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1675 -- # xtrace_fd 00:09:30.235 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:09:30.235 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:09:30.235 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:09:30.235 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:09:30.235 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:30.235 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:30.235 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:30.235 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:09:30.235 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:30.235 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:30.235 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:09:30.494 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:30.494 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:30.494 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:30.494 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:30.494 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:09:30.494 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:09:30.494 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:09:30.494 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:09:30.494 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:09:30.494 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:09:30.494 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:09:30.494 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:30.494 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:09:30.494 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:09:30.494 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:30.494 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:30.494 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:09:30.494 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:09:30.494 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:30.494 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:09:30.494 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:09:30.494 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:09:30.494 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:09:30.494 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:30.494 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:09:30.494 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:09:30.494 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:30.494 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:30.494 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:09:30.494 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:30.494 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:30.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.494 --rc genhtml_branch_coverage=1 00:09:30.494 --rc genhtml_function_coverage=1 00:09:30.494 --rc genhtml_legend=1 00:09:30.494 --rc geninfo_all_blocks=1 00:09:30.494 --rc geninfo_unexecuted_blocks=1 00:09:30.494 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:30.494 ' 00:09:30.494 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:30.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.494 --rc genhtml_branch_coverage=1 00:09:30.494 --rc genhtml_function_coverage=1 00:09:30.494 --rc genhtml_legend=1 00:09:30.494 --rc geninfo_all_blocks=1 00:09:30.494 --rc geninfo_unexecuted_blocks=1 00:09:30.494 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:30.494 ' 00:09:30.494 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:30.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.494 --rc genhtml_branch_coverage=1 00:09:30.494 --rc genhtml_function_coverage=1 00:09:30.494 --rc genhtml_legend=1 00:09:30.494 --rc geninfo_all_blocks=1 00:09:30.494 --rc geninfo_unexecuted_blocks=1 00:09:30.494 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:30.494 ' 00:09:30.494 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:30.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.494 --rc genhtml_branch_coverage=1 00:09:30.495 --rc genhtml_function_coverage=1 00:09:30.495 --rc genhtml_legend=1 00:09:30.495 --rc geninfo_all_blocks=1 00:09:30.495 --rc geninfo_unexecuted_blocks=1 00:09:30.495 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:09:30.495 ' 00:09:30.495 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@61 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/../common.sh 00:09:30.495 01:46:59 
llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:09:30.495 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@63 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:09:30.495 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c 00:09:30.495 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@64 -- # fuzz_num=25 00:09:30.495 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@65 -- # (( fuzz_num != 0 )) 00:09:30.495 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@67 -- # trap 'cleanup /tmp/llvm_fuzz* /var/tmp/suppress_nvmf_fuzz; exit 1' SIGINT SIGTERM EXIT 00:09:30.495 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@69 -- # mem_size=512 00:09:30.495 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@70 -- # [[ 1 -eq 1 ]] 00:09:30.495 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@71 -- # start_llvm_fuzz_short 25 1 00:09:30.495 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=25 00:09:30.495 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:09:30.495 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:09:30.495 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:30.495 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:09:30.495 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=0 00:09:30.495 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:30.495 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:30.495 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:09:30.495 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_0.conf 00:09:30.495 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:30.495 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:30.495 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 0 00:09:30.495 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4400 00:09:30.495 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:09:30.495 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' 00:09:30.495 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4400"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:30.495 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:30.495 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:30.495 01:46:59 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4400' -c /tmp/fuzz_json_0.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 -Z 0 00:09:30.495 [2024-10-09 01:46:59.979457] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 00:09:30.495 [2024-10-09 01:46:59.979511] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4040406 ] 00:09:30.753 [2024-10-09 01:47:00.182716] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.753 [2024-10-09 01:47:00.222469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.753 [2024-10-09 01:47:00.281839] tcp.c: 754:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:30.753 [2024-10-09 01:47:00.298037] tcp.c:1098:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4400 *** 00:09:30.753 INFO: Running with entropic power schedule (0xFF, 100). 00:09:30.753 INFO: Seed: 2311103083 00:09:30.754 INFO: Loaded 1 modules (383850 inline 8-bit counters): 383850 [0x2be0a0c, 0x2c3e576), 00:09:30.754 INFO: Loaded 1 PC tables (383850 PCs): 383850 [0x2c3e578,0x3219c18), 00:09:30.754 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_0 00:09:30.754 INFO: A corpus is not provided, starting from an empty corpus 00:09:30.754 #2 INITED exec/s: 0 rss: 66Mb 00:09:30.754 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:30.754 This may also happen if the target rejected all inputs we tried so far 00:09:30.754 [2024-10-09 01:47:00.363458] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:09:30.754 [2024-10-09 01:47:00.363488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:31.012 NEW_FUNC[1/714]: 0x43bbc8 in fuzz_admin_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:47 00:09:31.012 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:31.271 #7 NEW cov: 12034 ft: 12029 corp: 2/84b lim: 320 exec/s: 0 rss: 73Mb L: 83/83 MS: 5 ShuffleBytes-InsertRepeatedBytes-EraseBytes-CrossOver-InsertRepeatedBytes- 00:09:31.271 [2024-10-09 01:47:00.694455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:09:31.271 [2024-10-09 01:47:00.694526] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:31.271 NEW_FUNC[1/1]: 0x14f3628 in nvmf_tcp_req_set_cpl /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/tcp.c:2219 00:09:31.271 #13 NEW cov: 12190 ft: 12718 corp: 3/167b lim: 320 exec/s: 0 rss: 73Mb L: 83/83 MS: 1 CopyPart- 00:09:31.271 [2024-10-09 01:47:00.764453] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:09:31.271 [2024-10-09 01:47:00.764482] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:31.271 #14 NEW cov: 12196 ft: 13080 
corp: 4/251b lim: 320 exec/s: 0 rss: 73Mb L: 84/84 MS: 1 InsertByte- 00:09:31.271 [2024-10-09 01:47:00.804497] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:09:31.271 [2024-10-09 01:47:00.804524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:31.271 #20 NEW cov: 12281 ft: 13373 corp: 5/335b lim: 320 exec/s: 0 rss: 74Mb L: 84/84 MS: 1 InsertByte- 00:09:31.271 [2024-10-09 01:47:00.844784] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:7f7f7f7f SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:09:31.271 [2024-10-09 01:47:00.844811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:31.271 [2024-10-09 01:47:00.844873] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:7f7f7f7f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:31.271 [2024-10-09 01:47:00.844887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:31.271 NEW_FUNC[1/1]: 0x19252f8 in nvme_get_sgl_unkeyed /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_qpair.c:143 00:09:31.271 #22 NEW cov: 12303 ft: 14097 corp: 6/495b lim: 320 exec/s: 0 rss: 74Mb L: 160/160 MS: 2 EraseBytes-InsertRepeatedBytes- 00:09:31.271 [2024-10-09 01:47:00.884897] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:9090ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x7f7f7f7f7f7f7f7f 00:09:31.271 [2024-10-09 01:47:00.884926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:31.271 [2024-10-09 01:47:00.884983] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:7f7f7f7f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:31.271 [2024-10-09 01:47:00.885000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:31.271 #23 NEW cov: 12303 ft: 14157 corp: 7/655b lim: 320 exec/s: 0 rss: 74Mb L: 160/160 MS: 1 CopyPart- 00:09:31.530 [2024-10-09 01:47:00.944963] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ad) qid:0 cid:4 nsid:21212121 cdw10:21212121 cdw11:21212121 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:31.530 [2024-10-09 01:47:00.944990] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:31.530 #28 NEW cov: 12313 ft: 14300 corp: 8/724b lim: 320 exec/s: 0 rss: 74Mb L: 69/160 MS: 5 ChangeBit-ChangeBit-InsertByte-ShuffleBytes-InsertRepeatedBytes- 00:09:31.530 [2024-10-09 01:47:00.985025] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:09:31.530 [2024-10-09 01:47:00.985052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:31.530 #29 NEW cov: 12313 ft: 14319 corp: 9/812b lim: 320 exec/s: 0 rss: 74Mb L: 88/160 MS: 1 CMP- DE: "\377\377\377\377"- 00:09:31.530 [2024-10-09 01:47:01.045204] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ad) qid:0 cid:4 nsid:21212121 cdw10:21212121 cdw11:21212121 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:09:31.530 [2024-10-09 01:47:01.045232] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:31.530 #30 NEW cov: 12313 ft: 14358 corp: 10/889b lim: 320 exec/s: 0 rss: 74Mb L: 77/160 MS: 1 CMP- DE: "\001'$\017Z}J "- 00:09:31.530 [2024-10-09 01:47:01.105355] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:31.530 [2024-10-09 01:47:01.105381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:31.530 #31 NEW cov: 12313 ft: 14384 corp: 11/969b lim: 320 exec/s: 0 rss: 74Mb L: 80/160 MS: 1 InsertRepeatedBytes- 00:09:31.530 [2024-10-09 01:47:01.145486] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:09:31.530 [2024-10-09 01:47:01.145514] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:31.530 #32 NEW cov: 12313 ft: 14426 corp: 12/1053b lim: 320 exec/s: 0 rss: 74Mb L: 84/160 MS: 1 CrossOver- 00:09:31.789 [2024-10-09 01:47:01.205715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffff07ffffffff 00:09:31.789 [2024-10-09 01:47:01.205742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:31.789 [2024-10-09 01:47:01.205801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:09:31.789 [2024-10-09 01:47:01.205821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:31.789 NEW_FUNC[1/1]: 0x1bf30c8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:09:31.789 #37 NEW cov: 12336 ft: 14496 corp: 13/1210b lim: 320 exec/s: 0 rss: 74Mb L: 157/160 MS: 5 ChangeByte-ChangeBinInt-InsertRepeatedBytes-ChangeBinInt-InsertRepeatedBytes- 00:09:31.789 [2024-10-09 01:47:01.245789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffff07ffffffff 00:09:31.789 [2024-10-09 01:47:01.245820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:31.789 [2024-10-09 01:47:01.245877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:09:31.789 [2024-10-09 01:47:01.245891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:31.789 #38 NEW cov: 12336 ft: 14525 corp: 14/1367b lim: 320 exec/s: 0 rss: 74Mb L: 157/160 MS: 1 ChangeBinInt- 00:09:31.789 [2024-10-09 01:47:01.306063] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:7f204a7d SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:09:31.789 [2024-10-09 01:47:01.306091] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:31.789 [2024-10-09 01:47:01.306147] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:7f7f7f7f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:31.789 [2024-10-09 01:47:01.306160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:31.789 #39 NEW cov: 12336 ft: 14541 corp: 15/1527b lim: 320 exec/s: 0 rss: 74Mb L: 160/160 MS: 1 PersAutoDict- DE: "\001'$\017Z}J "- 00:09:31.789 [2024-10-09 01:47:01.346016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:09:31.790 [2024-10-09 01:47:01.346042] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:31.790 #40 NEW cov: 12336 ft: 14569 corp: 16/1611b lim: 320 exec/s: 40 rss: 74Mb L: 84/160 MS: 1 InsertByte- 00:09:31.790 [2024-10-09 01:47:01.406251] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:14242700 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffff07ffffffff 00:09:31.790 [2024-10-09 01:47:01.406282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:31.790 [2024-10-09 01:47:01.406339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:0001ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:09:31.790 [2024-10-09 01:47:01.406354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:31.790 #41 NEW cov: 12336 ft: 14590 corp: 17/1776b lim: 320 exec/s: 41 rss: 74Mb L: 165/165 MS: 1 CMP- DE: "\000'$\024Yg;\370"- 00:09:32.049 [2024-10-09 01:47:01.466484] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffff07ffffffff 00:09:32.049 [2024-10-09 01:47:01.466512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:32.049 [2024-10-09 01:47:01.466569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:09:32.049 [2024-10-09 01:47:01.466582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:32.049 [2024-10-09 01:47:01.466638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffffff cdw10:e7e7e7e7 cdw11:e7e7e7e7 SGL TRANSPORT DATA BLOCK TRANSPORT 0xe7e7e7e7e7e7e7e7 00:09:32.049 [2024-10-09 01:47:01.466652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:32.049 #42 NEW cov: 12336 ft: 14776 corp: 18/1971b lim: 320 exec/s: 42 rss: 74Mb L: 195/195 MS: 1 InsertRepeatedBytes- 00:09:32.049 [2024-10-09 01:47:01.506477] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:7f7f7f7f SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:09:32.049 
[2024-10-09 01:47:01.506504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:32.049 #43 NEW cov: 12336 ft: 14781 corp: 19/2058b lim: 320 exec/s: 43 rss: 74Mb L: 87/195 MS: 1 EraseBytes- 00:09:32.049 [2024-10-09 01:47:01.546606] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:09:32.049 [2024-10-09 01:47:01.546632] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:32.049 #44 NEW cov: 12336 ft: 14842 corp: 20/2142b lim: 320 exec/s: 44 rss: 74Mb L: 84/195 MS: 1 ShuffleBytes- 00:09:32.049 [2024-10-09 01:47:01.606774] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:09:32.049 [2024-10-09 01:47:01.606800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:32.049 #45 NEW cov: 12336 ft: 14876 corp: 21/2226b lim: 320 exec/s: 45 rss: 74Mb L: 84/195 MS: 1 CMP- DE: "\377&$\017\250b\266d"- 00:09:32.049 [2024-10-09 01:47:01.647011] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:9090ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x7f7f7f7f7f7f7f7f 00:09:32.049 [2024-10-09 01:47:01.647037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:32.049 [2024-10-09 01:47:01.647096] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:7f7f7f7f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:32.049 [2024-10-09 01:47:01.647111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:32.049 #46 NEW cov: 12336 ft: 14907 corp: 22/2394b lim: 320 exec/s: 46 rss: 74Mb L: 168/195 MS: 1 PersAutoDict- DE: "\000'$\024Yg;\370"- 00:09:32.049 [2024-10-09 01:47:01.707194] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:7f7f7f7f SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:09:32.049 [2024-10-09 01:47:01.707220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:32.049 [2024-10-09 01:47:01.707275] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:7f7f7f7f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:32.049 [2024-10-09 01:47:01.707289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:32.308 #47 NEW cov: 12336 ft: 14915 corp: 23/2554b lim: 320 exec/s: 47 rss: 74Mb L: 160/195 MS: 1 ChangeBinInt- 00:09:32.308 [2024-10-09 01:47:01.747172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ad) qid:0 cid:4 nsid:21212121 cdw10:21212121 cdw11:24270021 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:32.308 [2024-10-09 01:47:01.747197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:32.308 #48 NEW cov: 12336 ft: 14937 corp: 24/2623b lim: 320 exec/s: 48 rss: 74Mb L: 69/195 MS: 1 PersAutoDict- DE: "\000'$\024Yg;\370"- 00:09:32.308 [2024-10-09 01:47:01.787380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND 
(ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:09:32.308 [2024-10-09 01:47:01.787406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:32.308 [2024-10-09 01:47:01.787463] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffff00 cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:09:32.308 [2024-10-09 01:47:01.787477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:32.308 [2024-10-09 01:47:01.787535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:6 nsid:ffffffff cdw10:e7e7e7e7 cdw11:e7e7e7e7 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:09:32.308 [2024-10-09 01:47:01.787549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:32.308 #49 NEW cov: 12336 ft: 14940 corp: 25/2818b lim: 320 exec/s: 49 rss: 74Mb L: 195/195 MS: 1 CrossOver- 00:09:32.308 [2024-10-09 01:47:01.847568] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:7f7f7f7f SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:09:32.308 [2024-10-09 01:47:01.847594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:32.308 [2024-10-09 01:47:01.847652] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:7f7f7f7f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:32.308 [2024-10-09 01:47:01.847667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:32.308 #50 NEW cov: 12336 ft: 14945 corp: 26/2956b lim: 320 exec/s: 50 rss: 74Mb L: 138/195 MS: 1 EraseBytes- 00:09:32.308 [2024-10-09 01:47:01.887461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:09:32.308 [2024-10-09 01:47:01.887487] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:32.308 #51 NEW cov: 12336 ft: 14958 corp: 27/3039b lim: 320 exec/s: 51 rss: 74Mb L: 83/195 MS: 1 CrossOver- 00:09:32.308 [2024-10-09 01:47:01.927666] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:09:32.308 [2024-10-09 01:47:01.927693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:32.308 #52 NEW cov: 12336 ft: 14968 corp: 28/3123b lim: 320 exec/s: 52 rss: 74Mb L: 84/195 MS: 1 ChangeByte- 00:09:32.308 [2024-10-09 01:47:01.967877] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:32.308 [2024-10-09 01:47:01.967904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:32.308 [2024-10-09 01:47:01.967954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:09:32.308 [2024-10-09 01:47:01.967968] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:32.567 #53 NEW cov: 12338 ft: 14978 corp: 29/3278b lim: 320 exec/s: 53 rss: 75Mb L: 155/195 MS: 1 InsertRepeatedBytes- 00:09:32.567 [2024-10-09 01:47:02.027954] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xb662a80f2426ffff 00:09:32.567 [2024-10-09 01:47:02.027981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:32.567 #54 NEW cov: 12338 ft: 14983 corp: 30/3362b lim: 320 exec/s: 54 rss: 75Mb L: 84/195 MS: 1 PersAutoDict- DE: "\377&$\017\250b\266d"- 00:09:32.567 [2024-10-09 01:47:02.088055] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:09:32.567 [2024-10-09 01:47:02.088081] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:32.567 #55 NEW cov: 12338 ft: 15058 corp: 31/3451b lim: 320 exec/s: 55 rss: 75Mb L: 89/195 MS: 1 InsertByte- 00:09:32.567 [2024-10-09 01:47:02.148274] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffff07ffffffff 00:09:32.567 [2024-10-09 01:47:02.148300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:32.567 [2024-10-09 01:47:02.148358] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:09:32.567 [2024-10-09 01:47:02.148373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:32.567 #56 NEW cov: 12338 ft: 15074 corp: 32/3608b lim: 320 exec/s: 56 rss: 75Mb L: 157/195 MS: 1 ShuffleBytes- 00:09:32.567 [2024-10-09 01:47:02.188540] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:9090ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x7f7f7f7f7f7f7f7f 00:09:32.567 [2024-10-09 01:47:02.188565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:32.567 [2024-10-09 01:47:02.188624] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:7f7f7f7f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:32.567 [2024-10-09 01:47:02.188638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:32.567 #57 NEW cov: 12338 ft: 15088 corp: 33/3768b lim: 320 exec/s: 57 rss: 75Mb L: 160/195 MS: 1 ChangeByte- 00:09:32.567 [2024-10-09 01:47:02.228429] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffffffffffffff 00:09:32.567 [2024-10-09 01:47:02.228456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:32.827 #58 NEW cov: 12338 ft: 15125 corp: 34/3860b lim: 320 exec/s: 58 rss: 75Mb L: 92/195 MS: 1 CopyPart- 00:09:32.827 [2024-10-09 01:47:02.288695] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND 
(ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:14242700 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffff07ffffffff 00:09:32.827 [2024-10-09 01:47:02.288722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:32.827 [2024-10-09 01:47:02.288785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:5 nsid:ffffffff cdw10:01ffffff cdw11:ffffff00 SGL TRANSPORT DATA BLOCK TRANSPORT 0xffff31ffffffffff 00:09:32.827 [2024-10-09 01:47:02.288799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:32.827 #59 NEW cov: 12338 ft: 15128 corp: 35/4026b lim: 320 exec/s: 59 rss: 75Mb L: 166/195 MS: 1 InsertByte- 00:09:32.827 [2024-10-09 01:47:02.348789] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: ADMIN COMMAND (ff) qid:0 cid:4 nsid:ffffffff cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0xffffff07ffffffff 00:09:32.827 [2024-10-09 01:47:02.348821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:32.827 #60 NEW cov: 12338 ft: 15143 corp: 36/4149b lim: 320 exec/s: 30 rss: 75Mb L: 123/195 MS: 1 EraseBytes- 00:09:32.827 #60 DONE cov: 12338 ft: 15143 corp: 36/4149b lim: 320 exec/s: 30 rss: 75Mb 00:09:32.827 ###### Recommended dictionary. ###### 00:09:32.827 "\377\377\377\377" # Uses: 0 00:09:32.827 "\001'$\017Z}J " # Uses: 1 00:09:32.827 "\000'$\024Yg;\370" # Uses: 2 00:09:32.827 "\377&$\017\250b\266d" # Uses: 1 00:09:32.827 ###### End of recommended dictionary. ###### 00:09:32.827 Done 60 runs in 2 second(s) 00:09:32.827 01:47:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_0.conf /var/tmp/suppress_nvmf_fuzz 00:09:32.827 01:47:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:32.827 01:47:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:32.827 01:47:02 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:09:32.827 01:47:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=1 00:09:32.827 01:47:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:32.827 01:47:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:32.827 01:47:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:09:32.827 01:47:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_1.conf 00:09:32.827 01:47:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:32.827 01:47:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:32.827 01:47:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 1 00:09:33.086 01:47:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4401 00:09:33.086 01:47:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:09:33.086 01:47:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' 00:09:33.086 01:47:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4401"/' 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:33.086 01:47:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:33.086 01:47:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:33.086 01:47:02 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4401' -c /tmp/fuzz_json_1.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 -Z 1 00:09:33.086 [2024-10-09 01:47:02.533723] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 00:09:33.086 [2024-10-09 01:47:02.533799] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4040699 ] 00:09:33.086 [2024-10-09 01:47:02.736909] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.345 [2024-10-09 01:47:02.776608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.345 [2024-10-09 01:47:02.835678] tcp.c: 754:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:33.345 [2024-10-09 01:47:02.851889] tcp.c:1098:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4401 *** 00:09:33.345 INFO: Running with entropic power schedule (0xFF, 100). 00:09:33.345 INFO: Seed: 567120327 00:09:33.345 INFO: Loaded 1 modules (383850 inline 8-bit counters): 383850 [0x2be0a0c, 0x2c3e576), 00:09:33.345 INFO: Loaded 1 PC tables (383850 PCs): 383850 [0x2c3e578,0x3219c18), 00:09:33.345 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_1 00:09:33.345 INFO: A corpus is not provided, starting from an empty corpus 00:09:33.345 #2 INITED exec/s: 0 rss: 67Mb 00:09:33.345 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:09:33.345 This may also happen if the target rejected all inputs we tried so far 00:09:33.345 [2024-10-09 01:47:02.900285] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:33.345 [2024-10-09 01:47:02.900406] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:33.345 [2024-10-09 01:47:02.900509] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:33.345 [2024-10-09 01:47:02.900613] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:33.345 [2024-10-09 01:47:02.900826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3cff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:33.345 [2024-10-09 01:47:02.900859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:33.345 [2024-10-09 01:47:02.900915] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:33.345 [2024-10-09 01:47:02.900930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:33.345 [2024-10-09 01:47:02.900985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:33.345 [2024-10-09 01:47:02.901015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:33.345 [2024-10-09 01:47:02.901071] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:33.345 [2024-10-09 01:47:02.901085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:33.604 NEW_FUNC[1/715]: 0x43c4c8 in fuzz_admin_get_log_page_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:67 00:09:33.604 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:33.604 #6 NEW cov: 12095 ft: 12109 corp: 2/26b lim: 30 exec/s: 0 rss: 74Mb L: 25/25 MS: 4 ChangeByte-ChangeBit-InsertByte-InsertRepeatedBytes- 00:09:33.604 [2024-10-09 01:47:03.221245] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:33.604 [2024-10-09 01:47:03.221378] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:33.604 [2024-10-09 01:47:03.221489] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:33.604 [2024-10-09 01:47:03.221592] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:33.604 [2024-10-09 01:47:03.221845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3cff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:33.604 [2024-10-09 01:47:03.221890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:33.604 [2024-10-09 01:47:03.221959] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 
nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:33.604 [2024-10-09 01:47:03.221984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:33.604 [2024-10-09 01:47:03.222051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:33.604 [2024-10-09 01:47:03.222071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:33.604 [2024-10-09 01:47:03.222139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:33.604 [2024-10-09 01:47:03.222159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:33.604 #7 NEW cov: 12230 ft: 12635 corp: 3/51b lim: 30 exec/s: 0 rss: 74Mb L: 25/25 MS: 1 ChangeBit- 00:09:33.863 [2024-10-09 01:47:03.281155] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:33.863 [2024-10-09 01:47:03.281279] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:33.863 [2024-10-09 01:47:03.281493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:33.863 [2024-10-09 01:47:03.281523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:33.863 [2024-10-09 01:47:03.281580] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:33.863 [2024-10-09 01:47:03.281596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:33.863 #9 NEW cov: 12236 ft: 13471 corp: 4/65b lim: 30 exec/s: 0 rss: 74Mb L: 14/25 MS: 2 InsertByte-InsertRepeatedBytes- 00:09:33.863 [2024-10-09 01:47:03.321230] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000dbdb 00:09:33.863 [2024-10-09 01:47:03.321445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:8adb83db cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:33.863 [2024-10-09 01:47:03.321471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:33.863 #11 NEW cov: 12321 ft: 14110 corp: 5/75b lim: 30 exec/s: 0 rss: 74Mb L: 10/25 MS: 2 ChangeBit-InsertRepeatedBytes- 00:09:33.863 [2024-10-09 01:47:03.361459] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:33.863 [2024-10-09 01:47:03.361583] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:33.863 [2024-10-09 01:47:03.361691] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:33.863 [2024-10-09 01:47:03.361800] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:33.863 [2024-10-09 01:47:03.362039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3cff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:33.863 
[2024-10-09 01:47:03.362068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:33.863 [2024-10-09 01:47:03.362125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:33.863 [2024-10-09 01:47:03.362140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:33.863 [2024-10-09 01:47:03.362194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:33.863 [2024-10-09 01:47:03.362210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:33.863 [2024-10-09 01:47:03.362269] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:33.863 [2024-10-09 01:47:03.362285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:33.863 #12 NEW cov: 12321 ft: 14244 corp: 6/100b lim: 30 exec/s: 0 rss: 74Mb L: 25/25 MS: 1 ShuffleBytes- 00:09:33.863 [2024-10-09 01:47:03.421525] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (111696) > buf size (4096) 00:09:33.863 [2024-10-09 01:47:03.421751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6d130056 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:33.863 [2024-10-09 01:47:03.421778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:33.863 #14 NEW cov: 12344 ft: 14387 corp: 7/109b lim: 30 exec/s: 0 rss: 74Mb L: 9/25 MS: 2 CopyPart-CMP- DE: "m\023V\264\025$'\000"- 00:09:33.863 [2024-10-09 01:47:03.461642] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:33.863 [2024-10-09 01:47:03.461765] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:33.863 [2024-10-09 01:47:03.461992] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:33.863 [2024-10-09 01:47:03.462021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:33.863 [2024-10-09 01:47:03.462078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff830e cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:33.863 [2024-10-09 01:47:03.462094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:33.863 #15 NEW cov: 12344 ft: 14465 corp: 8/123b lim: 30 exec/s: 0 rss: 74Mb L: 14/25 MS: 1 ChangeBinInt- 00:09:33.863 [2024-10-09 01:47:03.521870] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:33.863 [2024-10-09 01:47:03.521986] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:33.863 [2024-10-09 01:47:03.522098] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff7e 00:09:33.863 [2024-10-09 01:47:03.522314] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:33.863 [2024-10-09 01:47:03.522343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:33.863 [2024-10-09 01:47:03.522401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff830e cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:33.863 [2024-10-09 01:47:03.522416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:33.863 [2024-10-09 01:47:03.522473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:33.863 [2024-10-09 01:47:03.522488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:34.122 #16 NEW cov: 12344 ft: 14766 corp: 9/142b lim: 30 exec/s: 0 rss: 74Mb L: 19/25 MS: 1 InsertRepeatedBytes- 00:09:34.122 [2024-10-09 01:47:03.581986] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (111696) > buf size (4096) 00:09:34.122 [2024-10-09 01:47:03.582208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6d130056 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.122 [2024-10-09 01:47:03.582234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:34.122 #17 NEW cov: 12344 ft: 14816 corp: 10/152b lim: 30 exec/s: 0 rss: 74Mb L: 10/25 MS: 1 InsertByte- 00:09:34.122 [2024-10-09 01:47:03.642259] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:34.122 [2024-10-09 01:47:03.642378] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:34.122 [2024-10-09 01:47:03.642489] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff2a 00:09:34.122 [2024-10-09 01:47:03.642599] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:34.122 [2024-10-09 01:47:03.642826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3cff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.122 [2024-10-09 01:47:03.642854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:34.122 [2024-10-09 01:47:03.642912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.122 [2024-10-09 01:47:03.642928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:34.122 [2024-10-09 01:47:03.642984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.122 [2024-10-09 01:47:03.642999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:34.122 [2024-10-09 01:47:03.643055] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 
cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.122 [2024-10-09 01:47:03.643069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:34.122 #18 NEW cov: 12344 ft: 14911 corp: 11/178b lim: 30 exec/s: 0 rss: 75Mb L: 26/26 MS: 1 InsertByte- 00:09:34.122 [2024-10-09 01:47:03.702400] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:34.122 [2024-10-09 01:47:03.702521] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:34.122 [2024-10-09 01:47:03.702629] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:34.122 [2024-10-09 01:47:03.702743] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:34.122 [2024-10-09 01:47:03.702972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3cff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.122 [2024-10-09 01:47:03.703002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:34.122 [2024-10-09 01:47:03.703060] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.122 [2024-10-09 01:47:03.703076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:34.122 [2024-10-09 01:47:03.703133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.122 [2024-10-09 01:47:03.703149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:34.122 [2024-10-09 01:47:03.703206] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.122 [2024-10-09 01:47:03.703221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:34.122 #19 NEW cov: 12344 ft: 14948 corp: 12/203b lim: 30 exec/s: 0 rss: 75Mb L: 25/26 MS: 1 ShuffleBytes- 00:09:34.122 [2024-10-09 01:47:03.742417] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:34.122 [2024-10-09 01:47:03.742632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:8adb83db cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.122 [2024-10-09 01:47:03.742662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:34.122 NEW_FUNC[1/1]: 0x1bf30c8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:09:34.122 #20 NEW cov: 12367 ft: 15018 corp: 13/213b lim: 30 exec/s: 0 rss: 75Mb L: 10/26 MS: 1 CMP- DE: "\377\377\377\377"- 00:09:34.381 [2024-10-09 01:47:03.802587] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:34.381 [2024-10-09 01:47:03.802709] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:34.381 [2024-10-09 01:47:03.802932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG 
PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.381 [2024-10-09 01:47:03.802959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:34.381 [2024-10-09 01:47:03.803016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff830e cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.381 [2024-10-09 01:47:03.803032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:34.381 #21 NEW cov: 12367 ft: 15077 corp: 14/227b lim: 30 exec/s: 0 rss: 75Mb L: 14/26 MS: 1 ShuffleBytes- 00:09:34.381 [2024-10-09 01:47:03.842691] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:34.381 [2024-10-09 01:47:03.842821] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:34.381 [2024-10-09 01:47:03.843046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.381 [2024-10-09 01:47:03.843074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:34.381 [2024-10-09 01:47:03.843130] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff830e cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.381 [2024-10-09 01:47:03.843146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:34.381 #22 NEW cov: 12367 ft: 15114 corp: 15/241b lim: 30 exec/s: 22 rss: 75Mb L: 14/26 MS: 1 ChangeByte- 00:09:34.381 [2024-10-09 01:47:03.902974] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:34.381 [2024-10-09 01:47:03.903101] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:34.381 [2024-10-09 01:47:03.903215] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:34.381 [2024-10-09 01:47:03.903321] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000dbdb 00:09:34.381 [2024-10-09 01:47:03.903540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:8adb83db cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.381 [2024-10-09 01:47:03.903568] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:34.381 [2024-10-09 01:47:03.903629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3cff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.381 [2024-10-09 01:47:03.903645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:34.381 [2024-10-09 01:47:03.903703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.381 [2024-10-09 01:47:03.903724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:34.382 [2024-10-09 01:47:03.903783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.382 [2024-10-09 01:47:03.903807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:34.382 #23 NEW cov: 12367 ft: 15161 corp: 16/268b lim: 30 exec/s: 23 rss: 75Mb L: 27/27 MS: 1 CrossOver- 00:09:34.382 [2024-10-09 01:47:03.963153] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:34.382 [2024-10-09 01:47:03.963280] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:34.382 [2024-10-09 01:47:03.963503] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.382 [2024-10-09 01:47:03.963534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:34.382 [2024-10-09 01:47:03.963599] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff830e cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.382 [2024-10-09 01:47:03.963616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:34.382 #24 NEW cov: 12367 ft: 15178 corp: 17/282b lim: 30 exec/s: 24 rss: 75Mb L: 14/27 MS: 1 CMP- DE: "\377\377"- 00:09:34.382 [2024-10-09 01:47:04.023298] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000f7f7 00:09:34.382 [2024-10-09 01:47:04.023418] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:34.382 [2024-10-09 01:47:04.023530] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:34.382 [2024-10-09 01:47:04.023634] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300007e0a 00:09:34.382 [2024-10-09 01:47:04.023855] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:f7f783f7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.382 [2024-10-09 01:47:04.023883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:34.382 [2024-10-09 01:47:04.023942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:f7f783f7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.382 [2024-10-09 01:47:04.023957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:34.382 [2024-10-09 01:47:04.024012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.382 [2024-10-09 01:47:04.024027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:34.382 [2024-10-09 01:47:04.024083] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:0eff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.382 [2024-10-09 01:47:04.024098] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:34.382 #25 NEW cov: 12367 ft: 15222 corp: 18/306b lim: 30 
exec/s: 25 rss: 75Mb L: 24/27 MS: 1 InsertRepeatedBytes- 00:09:34.640 [2024-10-09 01:47:04.063361] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:34.640 [2024-10-09 01:47:04.063480] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:34.640 [2024-10-09 01:47:04.063587] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff7e 00:09:34.640 [2024-10-09 01:47:04.063797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.640 [2024-10-09 01:47:04.063828] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:34.640 [2024-10-09 01:47:04.063887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff830e cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.640 [2024-10-09 01:47:04.063905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:34.640 [2024-10-09 01:47:04.063961] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.640 [2024-10-09 01:47:04.063976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:34.640 #26 NEW cov: 12367 ft: 15252 corp: 19/325b lim: 30 exec/s: 26 rss: 75Mb L: 19/27 MS: 1 PersAutoDict- DE: "\377\377"- 00:09:34.640 [2024-10-09 01:47:04.123593] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x100000002 00:09:34.640 [2024-10-09 01:47:04.123715] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:34.640 [2024-10-09 01:47:04.123832] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff2a 00:09:34.640 [2024-10-09 01:47:04.123943] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:34.641 [2024-10-09 01:47:04.124163] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:3cff81ff cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.641 [2024-10-09 01:47:04.124189] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:34.641 [2024-10-09 01:47:04.124245] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00ff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.641 [2024-10-09 01:47:04.124260] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:34.641 [2024-10-09 01:47:04.124316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.641 [2024-10-09 01:47:04.124330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:34.641 [2024-10-09 01:47:04.124386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.641 [2024-10-09 01:47:04.124400] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:34.641 #27 NEW cov: 12367 ft: 15262 corp: 20/351b lim: 30 exec/s: 27 rss: 75Mb L: 26/27 MS: 1 CMP- DE: "\001\000\002\000"- 00:09:34.641 [2024-10-09 01:47:04.183636] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (111696) > buf size (4096) 00:09:34.641 [2024-10-09 01:47:04.183853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6d130056 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.641 [2024-10-09 01:47:04.183878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:34.641 #28 NEW cov: 12367 ft: 15298 corp: 21/360b lim: 30 exec/s: 28 rss: 75Mb L: 9/27 MS: 1 PersAutoDict- DE: "m\023V\264\025$'\000"- 00:09:34.641 [2024-10-09 01:47:04.223831] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:34.641 [2024-10-09 01:47:04.223952] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:34.641 [2024-10-09 01:47:04.224061] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff7e 00:09:34.641 [2024-10-09 01:47:04.224277] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.641 [2024-10-09 01:47:04.224304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:34.641 [2024-10-09 01:47:04.224362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff830e cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.641 [2024-10-09 01:47:04.224380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:34.641 [2024-10-09 01:47:04.224435] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83df cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.641 [2024-10-09 01:47:04.224451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:34.641 #29 NEW cov: 12367 ft: 15308 corp: 22/379b lim: 30 exec/s: 29 rss: 75Mb L: 19/27 MS: 1 ChangeBit- 00:09:34.641 [2024-10-09 01:47:04.283925] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:34.641 [2024-10-09 01:47:04.284043] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:34.641 [2024-10-09 01:47:04.284249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.641 [2024-10-09 01:47:04.284276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:34.641 [2024-10-09 01:47:04.284333] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff830e cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.641 [2024-10-09 01:47:04.284349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:34.641 #30 NEW cov: 12367 ft: 15319 corp: 23/393b lim: 30 exec/s: 30 rss: 
75Mb L: 14/27 MS: 1 ChangeBit- 00:09:34.899 [2024-10-09 01:47:04.324071] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (261124) > buf size (4096) 00:09:34.899 [2024-10-09 01:47:04.324192] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:34.899 [2024-10-09 01:47:04.324298] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x300000eff 00:09:34.899 [2024-10-09 01:47:04.324516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ff000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.899 [2024-10-09 01:47:04.324544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:34.899 [2024-10-09 01:47:04.324603] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00008310 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.899 [2024-10-09 01:47:04.324618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:34.899 [2024-10-09 01:47:04.324676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.900 [2024-10-09 01:47:04.324692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:34.900 #31 NEW cov: 12367 ft: 15333 corp: 24/415b lim: 30 exec/s: 31 rss: 75Mb L: 22/27 MS: 1 CMP- DE: "\000\000\000\000\000\000\000\020"- 00:09:34.900 [2024-10-09 01:47:04.364163] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:34.900 [2024-10-09 01:47:04.364478] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.900 [2024-10-09 01:47:04.364505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:34.900 [2024-10-09 01:47:04.364562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.900 [2024-10-09 01:47:04.364579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:34.900 #32 NEW cov: 12384 ft: 15434 corp: 25/429b lim: 30 exec/s: 32 rss: 75Mb L: 14/27 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\020"- 00:09:34.900 [2024-10-09 01:47:04.404292] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (19804) > buf size (4096) 00:09:34.900 [2024-10-09 01:47:04.404522] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:1356006d cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.900 [2024-10-09 01:47:04.404549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:34.900 #33 NEW cov: 12384 ft: 15490 corp: 26/438b lim: 30 exec/s: 33 rss: 75Mb L: 9/27 MS: 1 ShuffleBytes- 00:09:34.900 [2024-10-09 01:47:04.444466] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:34.900 [2024-10-09 01:47:04.444591] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 
0x30000ffff 00:09:34.900 [2024-10-09 01:47:04.444705] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:34.900 [2024-10-09 01:47:04.444932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.900 [2024-10-09 01:47:04.444959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:34.900 [2024-10-09 01:47:04.445019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.900 [2024-10-09 01:47:04.445036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:34.900 [2024-10-09 01:47:04.445091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.900 [2024-10-09 01:47:04.445106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:34.900 #34 NEW cov: 12384 ft: 15499 corp: 27/460b lim: 30 exec/s: 34 rss: 75Mb L: 22/27 MS: 1 InsertRepeatedBytes- 00:09:34.900 [2024-10-09 01:47:04.484489] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000dbff 00:09:34.900 [2024-10-09 01:47:04.484721] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:8aff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.900 [2024-10-09 01:47:04.484748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:34.900 #35 NEW cov: 12384 ft: 15509 corp: 28/470b lim: 30 exec/s: 35 rss: 75Mb L: 10/27 MS: 1 ShuffleBytes- 00:09:34.900 [2024-10-09 01:47:04.524705] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:34.900 [2024-10-09 01:47:04.524830] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:34.900 [2024-10-09 01:47:04.524943] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:34.900 [2024-10-09 01:47:04.525053] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000dbdb 00:09:34.900 [2024-10-09 01:47:04.525287] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:8adb83db cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.900 [2024-10-09 01:47:04.525314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:34.900 [2024-10-09 01:47:04.525371] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3cff837f cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.900 [2024-10-09 01:47:04.525386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:34.900 [2024-10-09 01:47:04.525443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.900 [2024-10-09 01:47:04.525459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) 
qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:34.900 [2024-10-09 01:47:04.525518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:34.900 [2024-10-09 01:47:04.525534] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:35.158 #36 NEW cov: 12384 ft: 15521 corp: 29/497b lim: 30 exec/s: 36 rss: 75Mb L: 27/27 MS: 1 ShuffleBytes- 00:09:35.158 [2024-10-09 01:47:04.584746] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x152a 00:09:35.158 [2024-10-09 01:47:04.584974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6d130056 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.158 [2024-10-09 01:47:04.585001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:35.158 [2024-10-09 01:47:04.624893] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x20000b415 00:09:35.158 [2024-10-09 01:47:04.625116] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6d2b0213 cdw11:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.158 [2024-10-09 01:47:04.625152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:35.158 #38 NEW cov: 12384 ft: 15563 corp: 30/508b lim: 30 exec/s: 38 rss: 75Mb L: 11/27 MS: 2 InsertByte-InsertByte- 00:09:35.158 [2024-10-09 01:47:04.665030] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:35.158 [2024-10-09 01:47:04.665151] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:35.158 [2024-10-09 01:47:04.665368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff8327 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.158 [2024-10-09 01:47:04.665395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:35.158 [2024-10-09 01:47:04.665453] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.159 [2024-10-09 01:47:04.665469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:35.159 #39 NEW cov: 12384 ft: 15580 corp: 31/523b lim: 30 exec/s: 39 rss: 75Mb L: 15/27 MS: 1 InsertByte- 00:09:35.159 [2024-10-09 01:47:04.705188] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (111792) > buf size (4096) 00:09:35.159 [2024-10-09 01:47:04.705309] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x1356 00:09:35.159 [2024-10-09 01:47:04.705424] ctrlr.c:2667:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: len (184408) > buf size (4096) 00:09:35.159 [2024-10-09 01:47:04.705650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:6d2b0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.159 [2024-10-09 01:47:04.705678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:35.159 [2024-10-09 01:47:04.705737] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.159 [2024-10-09 01:47:04.705753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:35.159 [2024-10-09 01:47:04.705810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:b415002a cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.159 [2024-10-09 01:47:04.705831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:35.159 #40 NEW cov: 12384 ft: 15618 corp: 32/542b lim: 30 exec/s: 40 rss: 76Mb L: 19/27 MS: 1 PersAutoDict- DE: "\000\000\000\000\000\000\000\020"- 00:09:35.159 [2024-10-09 01:47:04.765343] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000f761 00:09:35.159 [2024-10-09 01:47:04.765470] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x10000f7f7 00:09:35.159 [2024-10-09 01:47:04.765581] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:35.159 [2024-10-09 01:47:04.765692] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ff0e 00:09:35.159 [2024-10-09 01:47:04.765917] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:f7f783f7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.159 [2024-10-09 01:47:04.765945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:35.159 [2024-10-09 01:47:04.766004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:61618161 cdw11:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.159 [2024-10-09 01:47:04.766020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:35.159 [2024-10-09 01:47:04.766078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:f7f783f7 cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.159 [2024-10-09 01:47:04.766093] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:35.159 [2024-10-09 01:47:04.766150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.159 [2024-10-09 01:47:04.766164] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:35.159 #41 NEW cov: 12384 ft: 15645 corp: 33/571b lim: 30 exec/s: 41 rss: 76Mb L: 29/29 MS: 1 InsertRepeatedBytes- 00:09:35.418 [2024-10-09 01:47:04.825547] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:35.418 [2024-10-09 01:47:04.825668] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:35.418 [2024-10-09 01:47:04.825776] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:35.418 [2024-10-09 01:47:04.825896] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000dbdb 00:09:35.418 [2024-10-09 01:47:04.826121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: 
*NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:8adb83db cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.418 [2024-10-09 01:47:04.826149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:35.418 [2024-10-09 01:47:04.826207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:3cff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.418 [2024-10-09 01:47:04.826224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:35.418 [2024-10-09 01:47:04.826283] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.418 [2024-10-09 01:47:04.826298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:35.418 [2024-10-09 01:47:04.826356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.418 [2024-10-09 01:47:04.826372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:35.418 #42 NEW cov: 12384 ft: 15659 corp: 34/598b lim: 30 exec/s: 42 rss: 76Mb L: 27/29 MS: 1 CopyPart- 00:09:35.418 [2024-10-09 01:47:04.865581] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:35.418 [2024-10-09 01:47:04.865703] ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x30000ffff 00:09:35.418 [2024-10-09 01:47:04.865925] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.418 [2024-10-09 01:47:04.865955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:35.418 [2024-10-09 01:47:04.866014] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:ffff83ff cdw11:00000003 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.418 [2024-10-09 01:47:04.866030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:35.418 #43 NEW cov: 12384 ft: 15665 corp: 35/612b lim: 30 exec/s: 21 rss: 76Mb L: 14/29 MS: 1 CopyPart- 00:09:35.418 #43 DONE cov: 12384 ft: 15665 corp: 35/612b lim: 30 exec/s: 21 rss: 76Mb 00:09:35.418 ###### Recommended dictionary. ###### 00:09:35.418 "m\023V\264\025$'\000" # Uses: 1 00:09:35.418 "\377\377\377\377" # Uses: 0 00:09:35.418 "\377\377" # Uses: 1 00:09:35.418 "\001\000\002\000" # Uses: 0 00:09:35.418 "\000\000\000\000\000\000\000\020" # Uses: 2 00:09:35.418 ###### End of recommended dictionary. 
###### 00:09:35.418 Done 43 runs in 2 second(s) 00:09:35.418 01:47:04 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_1.conf /var/tmp/suppress_nvmf_fuzz 00:09:35.418 01:47:05 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:35.418 01:47:05 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:35.418 01:47:05 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:09:35.418 01:47:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=2 00:09:35.418 01:47:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:35.418 01:47:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:35.418 01:47:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:09:35.418 01:47:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_2.conf 00:09:35.418 01:47:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:35.418 01:47:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:35.418 01:47:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 2 00:09:35.418 01:47:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4402 00:09:35.418 01:47:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:09:35.418 01:47:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' 00:09:35.418 01:47:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4402"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:35.418 01:47:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:35.418 01:47:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:35.418 01:47:05 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4402' -c /tmp/fuzz_json_2.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 -Z 2 00:09:35.418 [2024-10-09 01:47:05.051305] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 00:09:35.418 [2024-10-09 01:47:05.051374] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4040973 ] 00:09:35.676 [2024-10-09 01:47:05.248660] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.676 [2024-10-09 01:47:05.287680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.934 [2024-10-09 01:47:05.346976] tcp.c: 754:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:35.934 [2024-10-09 01:47:05.363164] tcp.c:1098:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4402 *** 00:09:35.934 INFO: Running with entropic power schedule (0xFF, 100). 
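(For reference: the nvmf/run.sh trace above repeats a fixed per-run pattern. The condensed sketch below only restates those traced commands; it is illustrative, not part of the captured output. $rootdir stands in for the long /var/jenkins/.../spdk prefix, and the two output redirections are inferred from the -c argument and the LSAN_OPTIONS value rather than shown verbatim in the trace.)

  rootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk   # abbreviation for the paths traced above
  fuzzer_type=2                                                 # run index: corpus suffix and -Z value
  port="44$(printf %02d "$fuzzer_type")"                        # -> 4402, this run's NVMe/TCP listener
  trid="trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:$port"
  mkdir -p "$rootdir/../corpus/llvm_nvmf_$fuzzer_type"
  # rewrite the default 4420 listener in the JSON config to the per-run port
  # (destination file inferred from the -c argument passed below)
  sed -e "s/\"trsvcid\": \"4420\"/\"trsvcid\": \"$port\"/" \
      "$rootdir/test/fuzz/llvm/nvmf/fuzz_json.conf" > "/tmp/fuzz_json_$fuzzer_type.conf"
  # known in-target allocations are suppressed for LeakSanitizer
  # (suppression file named in the LSAN_OPTIONS line traced above)
  echo leak:spdk_nvmf_qpair_disconnect >> /var/tmp/suppress_nvmf_fuzz
  echo leak:nvmf_ctrlr_create >> /var/tmp/suppress_nvmf_fuzz
  "$rootdir/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz" -m 0x1 -s 512 \
      -P "$rootdir/../output/llvm/" -F "$trid" -c "/tmp/fuzz_json_$fuzzer_type.conf" \
      -t 1 -D "$rootdir/../corpus/llvm_nvmf_$fuzzer_type" -Z "$fuzzer_type"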
00:09:35.934 INFO: Seed: 3079119477 00:09:35.934 INFO: Loaded 1 modules (383850 inline 8-bit counters): 383850 [0x2be0a0c, 0x2c3e576), 00:09:35.934 INFO: Loaded 1 PC tables (383850 PCs): 383850 [0x2c3e578,0x3219c18), 00:09:35.934 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_2 00:09:35.934 INFO: A corpus is not provided, starting from an empty corpus 00:09:35.934 #2 INITED exec/s: 0 rss: 66Mb 00:09:35.934 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:35.934 This may also happen if the target rejected all inputs we tried so far 00:09:35.934 [2024-10-09 01:47:05.412576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a28000a cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.934 [2024-10-09 01:47:05.412606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:35.934 [2024-10-09 01:47:05.412659] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:28280028 cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.934 [2024-10-09 01:47:05.412673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:35.934 [2024-10-09 01:47:05.412725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:28280028 cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:35.934 [2024-10-09 01:47:05.412739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:36.193 NEW_FUNC[1/714]: 0x43ef78 in fuzz_admin_identify_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:95 00:09:36.193 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:36.193 #9 NEW cov: 12073 ft: 12071 corp: 2/25b lim: 35 exec/s: 0 rss: 73Mb L: 24/24 MS: 2 CrossOver-InsertRepeatedBytes- 00:09:36.193 [2024-10-09 01:47:05.753574] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a28000a cdw11:60002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.193 [2024-10-09 01:47:05.753612] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:36.193 [2024-10-09 01:47:05.753671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:28280028 cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.193 [2024-10-09 01:47:05.753687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:36.193 [2024-10-09 01:47:05.753743] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:28280028 cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.193 [2024-10-09 01:47:05.753758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:36.193 #10 NEW cov: 12186 ft: 12540 corp: 3/50b lim: 35 exec/s: 0 rss: 73Mb L: 25/25 MS: 1 InsertByte- 00:09:36.193 [2024-10-09 01:47:05.813929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a28000a 
cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.193 [2024-10-09 01:47:05.813958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:36.193 [2024-10-09 01:47:05.814016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:28280028 cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.193 [2024-10-09 01:47:05.814031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:36.193 [2024-10-09 01:47:05.814089] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:d5d500d5 cdw11:d500d5d5 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.193 [2024-10-09 01:47:05.814108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:36.193 [2024-10-09 01:47:05.814164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:d5d500d5 cdw11:2800d528 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.193 [2024-10-09 01:47:05.814178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:36.193 [2024-10-09 01:47:05.814238] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:28280028 cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.193 [2024-10-09 01:47:05.814252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:36.193 #11 NEW cov: 12192 ft: 13479 corp: 4/85b lim: 35 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:09:36.193 [2024-10-09 01:47:05.853739] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a28000a cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.193 [2024-10-09 01:47:05.853767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:36.193 [2024-10-09 01:47:05.853831] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:28280028 cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.193 [2024-10-09 01:47:05.853845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:36.193 [2024-10-09 01:47:05.853903] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:28280028 cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.193 [2024-10-09 01:47:05.853918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:36.452 #12 NEW cov: 12277 ft: 13807 corp: 5/109b lim: 35 exec/s: 0 rss: 73Mb L: 24/35 MS: 1 ChangeByte- 00:09:36.452 [2024-10-09 01:47:05.893473] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:36.452 [2024-10-09 01:47:05.893602] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:36.452 [2024-10-09 01:47:05.893711] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:36.452 [2024-10-09 01:47:05.893826] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:36.452 [2024-10-09 
01:47:05.894043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.452 [2024-10-09 01:47:05.894073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:36.452 [2024-10-09 01:47:05.894132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.452 [2024-10-09 01:47:05.894149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:36.452 [2024-10-09 01:47:05.894207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.452 [2024-10-09 01:47:05.894224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:36.452 [2024-10-09 01:47:05.894280] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.452 [2024-10-09 01:47:05.894297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:36.452 #17 NEW cov: 12288 ft: 13936 corp: 6/139b lim: 35 exec/s: 0 rss: 73Mb L: 30/35 MS: 5 CrossOver-ChangeBinInt-ChangeBit-EraseBytes-InsertRepeatedBytes- 00:09:36.452 [2024-10-09 01:47:05.954016] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a28000a cdw11:9f0028d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.452 [2024-10-09 01:47:05.954044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:36.452 [2024-10-09 01:47:05.954104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:d7d700d7 cdw11:2800d7d9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.452 [2024-10-09 01:47:05.954118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:36.452 [2024-10-09 01:47:05.954177] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:28280028 cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.452 [2024-10-09 01:47:05.954192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:36.452 #18 NEW cov: 12288 ft: 14050 corp: 7/164b lim: 35 exec/s: 0 rss: 74Mb L: 25/35 MS: 1 ChangeBinInt- 00:09:36.452 [2024-10-09 01:47:06.014158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a28000a cdw11:9f0028d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.452 [2024-10-09 01:47:06.014185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:36.452 [2024-10-09 01:47:06.014243] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:d7d700d7 cdw11:2800d7d9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.452 [2024-10-09 01:47:06.014258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:36.452 [2024-10-09 01:47:06.014316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:28280028 cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.452 [2024-10-09 01:47:06.014332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:36.452 #19 NEW cov: 12288 ft: 14097 corp: 8/189b lim: 35 exec/s: 0 rss: 74Mb L: 25/35 MS: 1 ChangeBit- 00:09:36.452 [2024-10-09 01:47:06.074561] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:ecec00ec cdw11:ec00ecec SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.452 [2024-10-09 01:47:06.074588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:36.452 [2024-10-09 01:47:06.074646] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:ecec00ec cdw11:28000a0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.452 [2024-10-09 01:47:06.074661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:36.452 [2024-10-09 01:47:06.074716] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:60280028 cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.452 [2024-10-09 01:47:06.074732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:36.452 [2024-10-09 01:47:06.074786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:28280028 cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.452 [2024-10-09 01:47:06.074801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:36.452 [2024-10-09 01:47:06.074860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:28280028 cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.452 [2024-10-09 01:47:06.074874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:36.452 #20 NEW cov: 12288 ft: 14121 corp: 9/224b lim: 35 exec/s: 0 rss: 74Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:09:36.452 [2024-10-09 01:47:06.114439] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a28000a cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.452 [2024-10-09 01:47:06.114466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:36.452 [2024-10-09 01:47:06.114525] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:28280028 cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.452 [2024-10-09 01:47:06.114541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:36.452 [2024-10-09 01:47:06.114596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:28280028 cdw11:25002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.452 [2024-10-09 01:47:06.114611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 
cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:36.711 #21 NEW cov: 12288 ft: 14161 corp: 10/251b lim: 35 exec/s: 0 rss: 74Mb L: 27/35 MS: 1 InsertRepeatedBytes- 00:09:36.711 [2024-10-09 01:47:06.154521] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a28000a cdw11:9f0028d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.711 [2024-10-09 01:47:06.154548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:36.711 [2024-10-09 01:47:06.154607] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:d7d700d7 cdw11:2800d7d9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.711 [2024-10-09 01:47:06.154621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:36.711 [2024-10-09 01:47:06.154674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:28280028 cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.711 [2024-10-09 01:47:06.154689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:36.711 #22 NEW cov: 12288 ft: 14189 corp: 11/272b lim: 35 exec/s: 0 rss: 74Mb L: 21/35 MS: 1 EraseBytes- 00:09:36.711 [2024-10-09 01:47:06.194644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a28000a cdw11:9f0028d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.711 [2024-10-09 01:47:06.194671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:36.711 [2024-10-09 01:47:06.194728] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:282800d7 cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.711 [2024-10-09 01:47:06.194743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:36.711 [2024-10-09 01:47:06.194800] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:28280028 cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.711 [2024-10-09 01:47:06.194818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:36.712 #23 NEW cov: 12288 ft: 14213 corp: 12/293b lim: 35 exec/s: 0 rss: 74Mb L: 21/35 MS: 1 CrossOver- 00:09:36.712 [2024-10-09 01:47:06.254650] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a28000a cdw11:0a002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.712 [2024-10-09 01:47:06.254677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:36.712 [2024-10-09 01:47:06.254737] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:28d80028 cdw11:d7009fd7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.712 [2024-10-09 01:47:06.254753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:36.712 #24 NEW cov: 12288 ft: 14496 corp: 13/311b lim: 35 exec/s: 0 rss: 74Mb L: 18/35 MS: 1 CrossOver- 00:09:36.712 [2024-10-09 01:47:06.294916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 
nsid:0 cdw10:0a28000a cdw11:9f002898 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.712 [2024-10-09 01:47:06.294943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:36.712 [2024-10-09 01:47:06.295000] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:282800d7 cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.712 [2024-10-09 01:47:06.295015] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:36.712 [2024-10-09 01:47:06.295073] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:28280028 cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.712 [2024-10-09 01:47:06.295088] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:36.712 NEW_FUNC[1/1]: 0x1bf30c8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:09:36.712 #25 NEW cov: 12311 ft: 14562 corp: 14/332b lim: 35 exec/s: 0 rss: 74Mb L: 21/35 MS: 1 ChangeBit- 00:09:36.712 [2024-10-09 01:47:06.355100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a28000a cdw11:9f002898 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.712 [2024-10-09 01:47:06.355128] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:36.712 [2024-10-09 01:47:06.355186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:280000d7 cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.712 [2024-10-09 01:47:06.355202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:36.712 [2024-10-09 01:47:06.355261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:28280028 cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.712 [2024-10-09 01:47:06.355277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:36.971 #26 NEW cov: 12311 ft: 14574 corp: 15/353b lim: 35 exec/s: 26 rss: 74Mb L: 21/35 MS: 1 ChangeByte- 00:09:36.971 [2024-10-09 01:47:06.415165] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a28000a cdw11:28002800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.971 [2024-10-09 01:47:06.415195] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:36.971 [2024-10-09 01:47:06.415258] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:28280028 cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.971 [2024-10-09 01:47:06.415275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:36.971 #27 NEW cov: 12311 ft: 14644 corp: 16/369b lim: 35 exec/s: 27 rss: 74Mb L: 16/35 MS: 1 EraseBytes- 00:09:36.971 [2024-10-09 01:47:06.475174] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:36.971 [2024-10-09 01:47:06.475303] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:36.971 [2024-10-09 
01:47:06.475421] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:36.971 [2024-10-09 01:47:06.475540] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:36.971 [2024-10-09 01:47:06.475767] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.971 [2024-10-09 01:47:06.475797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:36.971 [2024-10-09 01:47:06.475861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.971 [2024-10-09 01:47:06.475885] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:36.971 [2024-10-09 01:47:06.475944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.971 [2024-10-09 01:47:06.475962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:36.971 [2024-10-09 01:47:06.476026] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.971 [2024-10-09 01:47:06.476045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:36.971 #28 NEW cov: 12311 ft: 14698 corp: 17/403b lim: 35 exec/s: 28 rss: 74Mb L: 34/35 MS: 1 CMP- DE: "\001\000\000\034"- 00:09:36.971 [2024-10-09 01:47:06.546012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a28000a cdw11:e100e1e1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.971 [2024-10-09 01:47:06.546043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:36.971 [2024-10-09 01:47:06.546106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:e1e100e1 cdw11:e100e1e1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.971 [2024-10-09 01:47:06.546123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:36.971 [2024-10-09 01:47:06.546183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:9fd700d8 cdw11:d700d7d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.971 [2024-10-09 01:47:06.546199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:36.971 [2024-10-09 01:47:06.546260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:282800d9 cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.971 [2024-10-09 01:47:06.546275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:36.971 [2024-10-09 01:47:06.546335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:28280028 cdw11:2c002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:09:36.971 [2024-10-09 01:47:06.546351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:36.971 #29 NEW cov: 12311 ft: 14780 corp: 18/438b lim: 35 exec/s: 29 rss: 74Mb L: 35/35 MS: 1 InsertRepeatedBytes- 00:09:36.971 [2024-10-09 01:47:06.605785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a28000a cdw11:9f0028d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.971 [2024-10-09 01:47:06.605817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:36.971 [2024-10-09 01:47:06.605879] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:d7d700d7 cdw11:2800d7d9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.971 [2024-10-09 01:47:06.605907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:36.971 [2024-10-09 01:47:06.605967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:28280028 cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:36.971 [2024-10-09 01:47:06.605981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:36.971 #30 NEW cov: 12311 ft: 14804 corp: 19/464b lim: 35 exec/s: 30 rss: 74Mb L: 26/35 MS: 1 InsertByte- 00:09:37.229 [2024-10-09 01:47:06.645926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a28000a cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:37.229 [2024-10-09 01:47:06.645958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:37.229 [2024-10-09 01:47:06.646018] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:28280028 cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:37.229 [2024-10-09 01:47:06.646033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:37.229 [2024-10-09 01:47:06.646091] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:28280028 cdw11:25002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:37.229 [2024-10-09 01:47:06.646106] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:37.229 #31 NEW cov: 12311 ft: 14840 corp: 20/491b lim: 35 exec/s: 31 rss: 74Mb L: 27/35 MS: 1 ShuffleBytes- 00:09:37.229 [2024-10-09 01:47:06.705932] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a28000a cdw11:0a002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:37.229 [2024-10-09 01:47:06.705959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:37.229 [2024-10-09 01:47:06.706017] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:28d80028 cdw11:d7009fd7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:37.229 [2024-10-09 01:47:06.706032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:37.229 #32 NEW cov: 12311 ft: 14876 corp: 21/509b lim: 35 exec/s: 32 rss: 74Mb L: 18/35 MS: 1 
ChangeBit- 00:09:37.229 [2024-10-09 01:47:06.766373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a28000a cdw11:9f0028d8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:37.229 [2024-10-09 01:47:06.766401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:37.229 [2024-10-09 01:47:06.766458] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:d7d700d7 cdw11:2800d7d9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:37.230 [2024-10-09 01:47:06.766473] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:37.230 [2024-10-09 01:47:06.766530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:dfdf00df cdw11:df00dfdf SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:37.230 [2024-10-09 01:47:06.766545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:37.230 [2024-10-09 01:47:06.766602] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:282800df cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:37.230 [2024-10-09 01:47:06.766616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:37.230 #33 NEW cov: 12311 ft: 14930 corp: 22/539b lim: 35 exec/s: 33 rss: 74Mb L: 30/35 MS: 1 InsertRepeatedBytes- 00:09:37.230 [2024-10-09 01:47:06.806350] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0aa8000a cdw11:d8002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:37.230 [2024-10-09 01:47:06.806377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:37.230 [2024-10-09 01:47:06.806438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:d7d700d7 cdw11:d900d7d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:37.230 [2024-10-09 01:47:06.806453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:37.230 [2024-10-09 01:47:06.806509] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:2828001b cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:37.230 [2024-10-09 01:47:06.806529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:37.230 #34 NEW cov: 12311 ft: 14936 corp: 23/566b lim: 35 exec/s: 34 rss: 74Mb L: 27/35 MS: 1 InsertByte- 00:09:37.230 [2024-10-09 01:47:06.866176] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:37.230 [2024-10-09 01:47:06.866306] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:37.230 [2024-10-09 01:47:06.866422] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:37.230 [2024-10-09 01:47:06.866534] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:37.230 [2024-10-09 01:47:06.866759] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:37.230 [2024-10-09 
01:47:06.866789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:37.230 [2024-10-09 01:47:06.866851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:37.230 [2024-10-09 01:47:06.866870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:37.230 [2024-10-09 01:47:06.866928] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:37.230 [2024-10-09 01:47:06.866945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:37.230 [2024-10-09 01:47:06.867002] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:37.230 [2024-10-09 01:47:06.867020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:37.230 #40 NEW cov: 12311 ft: 14940 corp: 24/596b lim: 35 exec/s: 40 rss: 74Mb L: 30/35 MS: 1 ShuffleBytes- 00:09:37.487 [2024-10-09 01:47:06.906505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a28000a cdw11:0a002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:37.487 [2024-10-09 01:47:06.906533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:37.487 [2024-10-09 01:47:06.906593] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:28000028 cdw11:d700d89f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:37.487 [2024-10-09 01:47:06.906608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:37.487 #41 NEW cov: 12311 ft: 14986 corp: 25/615b lim: 35 exec/s: 41 rss: 75Mb L: 19/35 MS: 1 InsertByte- 00:09:37.487 [2024-10-09 01:47:06.966552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:8c8c008c cdw11:8c008c8c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:37.487 [2024-10-09 01:47:06.966579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:37.487 #42 NEW cov: 12311 ft: 15278 corp: 26/627b lim: 35 exec/s: 42 rss: 75Mb L: 12/35 MS: 1 InsertRepeatedBytes- 00:09:37.487 [2024-10-09 01:47:07.007164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a28000a cdw11:e100e1e1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:37.487 [2024-10-09 01:47:07.007190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:37.487 [2024-10-09 01:47:07.007249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:e1e100e1 cdw11:e100e1e1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:37.487 [2024-10-09 01:47:07.007267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:37.487 [2024-10-09 01:47:07.007325] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:9fd700d8 cdw11:d700ead7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:37.487 [2024-10-09 01:47:07.007341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:37.487 [2024-10-09 01:47:07.007399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:282800d9 cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:37.488 [2024-10-09 01:47:07.007413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:37.488 [2024-10-09 01:47:07.007474] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:8 nsid:0 cdw10:28280028 cdw11:2c002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:37.488 [2024-10-09 01:47:07.007488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:37.488 #43 NEW cov: 12311 ft: 15289 corp: 27/662b lim: 35 exec/s: 43 rss: 75Mb L: 35/35 MS: 1 ChangeByte- 00:09:37.488 [2024-10-09 01:47:07.067139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a28000a cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:37.488 [2024-10-09 01:47:07.067166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:37.488 [2024-10-09 01:47:07.067224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:28280028 cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:37.488 [2024-10-09 01:47:07.067240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:37.488 #44 NEW cov: 12311 ft: 15696 corp: 28/689b lim: 35 exec/s: 44 rss: 75Mb L: 27/35 MS: 1 PersAutoDict- DE: "\001\000\000\034"- 00:09:37.488 [2024-10-09 01:47:07.127006] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:8c8c008c cdw11:8c00918c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:37.488 [2024-10-09 01:47:07.127033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:37.746 #45 NEW cov: 12311 ft: 15757 corp: 29/701b lim: 35 exec/s: 45 rss: 75Mb L: 12/35 MS: 1 ChangeBinInt- 00:09:37.746 [2024-10-09 01:47:07.187058] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:37.746 [2024-10-09 01:47:07.187185] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:37.746 [2024-10-09 01:47:07.187301] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:37.746 [2024-10-09 01:47:07.187413] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:37.746 [2024-10-09 01:47:07.187635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:37.746 [2024-10-09 01:47:07.187665] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:37.746 [2024-10-09 01:47:07.187725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00000000 
cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:37.746 [2024-10-09 01:47:07.187743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:37.746 [2024-10-09 01:47:07.187801] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:37.746 [2024-10-09 01:47:07.187822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:37.746 [2024-10-09 01:47:07.187882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:37.746 [2024-10-09 01:47:07.187902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:37.746 #46 NEW cov: 12311 ft: 15781 corp: 30/733b lim: 35 exec/s: 46 rss: 75Mb L: 32/35 MS: 1 CrossOver- 00:09:37.746 [2024-10-09 01:47:07.247622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a28000a cdw11:d70028d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:37.746 [2024-10-09 01:47:07.247648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:37.746 [2024-10-09 01:47:07.247708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:d8d7009f cdw11:2800d7d9 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:37.746 [2024-10-09 01:47:07.247724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:37.746 [2024-10-09 01:47:07.247781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:28280028 cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:37.746 [2024-10-09 01:47:07.247797] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:37.746 #47 NEW cov: 12311 ft: 15791 corp: 31/758b lim: 35 exec/s: 47 rss: 75Mb L: 25/35 MS: 1 ShuffleBytes- 00:09:37.746 [2024-10-09 01:47:07.287710] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a28000a cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:37.746 [2024-10-09 01:47:07.287738] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:37.746 [2024-10-09 01:47:07.287797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:28280028 cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:37.746 [2024-10-09 01:47:07.287818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:37.746 [2024-10-09 01:47:07.287876] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:28280028 cdw11:25002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:37.746 [2024-10-09 01:47:07.287891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:37.746 #48 NEW cov: 12311 ft: 15798 corp: 32/785b lim: 35 exec/s: 48 rss: 75Mb L: 27/35 MS: 1 ChangeByte- 00:09:37.746 
[2024-10-09 01:47:07.327818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:0a28000a cdw11:9f002898 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:37.746 [2024-10-09 01:47:07.327844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:37.746 [2024-10-09 01:47:07.327906] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:28280028 cdw11:d70028d7 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:37.746 [2024-10-09 01:47:07.327921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:37.746 [2024-10-09 01:47:07.327977] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:28280028 cdw11:28002828 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:37.746 [2024-10-09 01:47:07.327992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:37.746 #49 NEW cov: 12311 ft: 15812 corp: 33/806b lim: 35 exec/s: 49 rss: 75Mb L: 21/35 MS: 1 ShuffleBytes- 00:09:37.746 [2024-10-09 01:47:07.367548] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:37.746 [2024-10-09 01:47:07.367674] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:37.746 [2024-10-09 01:47:07.367784] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:37.746 [2024-10-09 01:47:07.367910] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:09:37.746 [2024-10-09 01:47:07.368134] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:37.746 [2024-10-09 01:47:07.368162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:37.746 [2024-10-09 01:47:07.368222] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:5 nsid:0 cdw10:00010000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:37.746 [2024-10-09 01:47:07.368240] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:37.746 [2024-10-09 01:47:07.368299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:6 nsid:0 cdw10:00050000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:37.746 [2024-10-09 01:47:07.368316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:37.746 [2024-10-09 01:47:07.368375] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:37.746 [2024-10-09 01:47:07.368391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:37.746 #50 NEW cov: 12311 ft: 15826 corp: 34/840b lim: 35 exec/s: 25 rss: 75Mb L: 34/35 MS: 1 ChangeBinInt- 00:09:37.746 #50 DONE cov: 12311 ft: 15826 corp: 34/840b lim: 35 exec/s: 25 rss: 75Mb 00:09:37.746 ###### Recommended dictionary. 
###### 00:09:37.746 "\001\000\000\034" # Uses: 1 00:09:37.746 ###### End of recommended dictionary. ###### 00:09:37.746 Done 50 runs in 2 second(s) 00:09:38.005 01:47:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_2.conf /var/tmp/suppress_nvmf_fuzz 00:09:38.005 01:47:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:38.005 01:47:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:38.005 01:47:07 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:09:38.005 01:47:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=3 00:09:38.005 01:47:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:38.005 01:47:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:38.005 01:47:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:09:38.005 01:47:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_3.conf 00:09:38.005 01:47:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:38.005 01:47:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:38.005 01:47:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 3 00:09:38.006 01:47:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4403 00:09:38.006 01:47:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:09:38.006 01:47:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' 00:09:38.006 01:47:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4403"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:38.006 01:47:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:38.006 01:47:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:38.006 01:47:07 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4403' -c /tmp/fuzz_json_3.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 -Z 3 00:09:38.006 [2024-10-09 01:47:07.573743] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 
00:09:38.006 [2024-10-09 01:47:07.573810] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4041325 ] 00:09:38.264 [2024-10-09 01:47:07.770689] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.264 [2024-10-09 01:47:07.811529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.264 [2024-10-09 01:47:07.870842] tcp.c: 754:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:38.264 [2024-10-09 01:47:07.887049] tcp.c:1098:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4403 *** 00:09:38.264 INFO: Running with entropic power schedule (0xFF, 100). 00:09:38.264 INFO: Seed: 1309163433 00:09:38.264 INFO: Loaded 1 modules (383850 inline 8-bit counters): 383850 [0x2be0a0c, 0x2c3e576), 00:09:38.264 INFO: Loaded 1 PC tables (383850 PCs): 383850 [0x2c3e578,0x3219c18), 00:09:38.264 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_3 00:09:38.264 INFO: A corpus is not provided, starting from an empty corpus 00:09:38.264 #2 INITED exec/s: 0 rss: 66Mb 00:09:38.264 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:38.264 This may also happen if the target rejected all inputs we tried so far 00:09:38.783 NEW_FUNC[1/703]: 0x440c58 in fuzz_admin_abort_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:114 00:09:38.783 NEW_FUNC[2/703]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:38.783 #6 NEW cov: 11967 ft: 11968 corp: 2/6b lim: 20 exec/s: 0 rss: 73Mb L: 5/5 MS: 4 CrossOver-CrossOver-ShuffleBytes-CopyPart- 00:09:38.783 #10 NEW cov: 12106 ft: 13016 corp: 3/26b lim: 20 exec/s: 0 rss: 73Mb L: 20/20 MS: 4 ChangeByte-InsertByte-ChangeByte-InsertRepeatedBytes- 00:09:38.783 #11 NEW cov: 12112 ft: 13230 corp: 4/46b lim: 20 exec/s: 0 rss: 74Mb L: 20/20 MS: 1 InsertRepeatedBytes- 00:09:38.783 #12 NEW cov: 12197 ft: 13453 corp: 5/52b lim: 20 exec/s: 0 rss: 74Mb L: 6/20 MS: 1 CrossOver- 00:09:39.044 #13 NEW cov: 12197 ft: 13528 corp: 6/57b lim: 20 exec/s: 0 rss: 74Mb L: 5/20 MS: 1 ChangeByte- 00:09:39.044 #16 NEW cov: 12205 ft: 13723 corp: 7/70b lim: 20 exec/s: 0 rss: 74Mb L: 13/20 MS: 3 ChangeBinInt-ShuffleBytes-InsertRepeatedBytes- 00:09:39.044 #17 NEW cov: 12205 ft: 13758 corp: 8/85b lim: 20 exec/s: 0 rss: 74Mb L: 15/20 MS: 1 EraseBytes- 00:09:39.044 #18 NEW cov: 12205 ft: 13795 corp: 9/98b lim: 20 exec/s: 0 rss: 74Mb L: 13/20 MS: 1 CrossOver- 00:09:39.303 #19 NEW cov: 12205 ft: 13902 corp: 10/118b lim: 20 exec/s: 0 rss: 74Mb L: 20/20 MS: 1 ShuffleBytes- 00:09:39.303 #20 NEW cov: 12205 ft: 13932 corp: 11/131b lim: 20 exec/s: 0 rss: 74Mb L: 13/20 MS: 1 ChangeByte- 00:09:39.303 NEW_FUNC[1/1]: 0x1bf30c8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:09:39.303 #21 NEW cov: 12229 ft: 14179 corp: 12/141b lim: 20 exec/s: 0 rss: 74Mb L: 10/20 MS: 1 InsertRepeatedBytes- 00:09:39.303 #22 NEW cov: 12229 ft: 14204 corp: 13/154b lim: 20 exec/s: 0 rss: 74Mb L: 13/20 MS: 1 ChangeByte- 00:09:39.561 #23 NEW cov: 12229 ft: 14233 corp: 14/174b lim: 20 exec/s: 23 rss: 74Mb L: 20/20 MS: 1 ChangeBinInt- 00:09:39.561 #24 NEW cov: 12229 ft: 14260 corp: 15/194b lim: 20 exec/s: 24 rss: 74Mb L: 
20/20 MS: 1 CopyPart- 00:09:39.561 #25 NEW cov: 12229 ft: 14270 corp: 16/204b lim: 20 exec/s: 25 rss: 74Mb L: 10/20 MS: 1 CopyPart- 00:09:39.561 #26 NEW cov: 12229 ft: 14311 corp: 17/210b lim: 20 exec/s: 26 rss: 74Mb L: 6/20 MS: 1 ChangeBit- 00:09:39.820 #27 NEW cov: 12229 ft: 14403 corp: 18/214b lim: 20 exec/s: 27 rss: 74Mb L: 4/20 MS: 1 EraseBytes- 00:09:39.820 #28 NEW cov: 12229 ft: 14419 corp: 19/219b lim: 20 exec/s: 28 rss: 74Mb L: 5/20 MS: 1 ChangeBit- 00:09:39.820 #29 NEW cov: 12229 ft: 14470 corp: 20/239b lim: 20 exec/s: 29 rss: 74Mb L: 20/20 MS: 1 ChangeByte- 00:09:39.820 #30 NEW cov: 12229 ft: 14490 corp: 21/244b lim: 20 exec/s: 30 rss: 74Mb L: 5/20 MS: 1 CopyPart- 00:09:40.079 #31 NEW cov: 12229 ft: 14504 corp: 22/254b lim: 20 exec/s: 31 rss: 74Mb L: 10/20 MS: 1 ShuffleBytes- 00:09:40.079 #32 NEW cov: 12229 ft: 14562 corp: 23/258b lim: 20 exec/s: 32 rss: 75Mb L: 4/20 MS: 1 ChangeBit- 00:09:40.079 #33 NEW cov: 12229 ft: 14612 corp: 24/268b lim: 20 exec/s: 33 rss: 75Mb L: 10/20 MS: 1 ChangeBit- 00:09:40.079 #34 NEW cov: 12229 ft: 14649 corp: 25/274b lim: 20 exec/s: 34 rss: 75Mb L: 6/20 MS: 1 ChangeBinInt- 00:09:40.079 #35 NEW cov: 12229 ft: 14705 corp: 26/288b lim: 20 exec/s: 35 rss: 75Mb L: 14/20 MS: 1 CMP- DE: "\000\000\000\000"- 00:09:40.338 #37 NEW cov: 12229 ft: 14728 corp: 27/295b lim: 20 exec/s: 37 rss: 75Mb L: 7/20 MS: 2 EraseBytes-PersAutoDict- DE: "\000\000\000\000"- 00:09:40.338 #38 NEW cov: 12229 ft: 14738 corp: 28/303b lim: 20 exec/s: 38 rss: 75Mb L: 8/20 MS: 1 EraseBytes- 00:09:40.338 #39 NEW cov: 12229 ft: 14773 corp: 29/308b lim: 20 exec/s: 39 rss: 75Mb L: 5/20 MS: 1 PersAutoDict- DE: "\000\000\000\000"- 00:09:40.338 #40 NEW cov: 12229 ft: 14804 corp: 30/322b lim: 20 exec/s: 20 rss: 75Mb L: 14/20 MS: 1 CrossOver- 00:09:40.338 #40 DONE cov: 12229 ft: 14804 corp: 30/322b lim: 20 exec/s: 20 rss: 75Mb 00:09:40.338 ###### Recommended dictionary. ###### 00:09:40.338 "\000\000\000\000" # Uses: 2 00:09:40.338 ###### End of recommended dictionary. 
###### 00:09:40.338 Done 40 runs in 2 second(s) 00:09:40.597 01:47:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_3.conf /var/tmp/suppress_nvmf_fuzz 00:09:40.597 01:47:10 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:40.597 01:47:10 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:40.597 01:47:10 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:09:40.597 01:47:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=4 00:09:40.597 01:47:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:40.597 01:47:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:40.597 01:47:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:09:40.597 01:47:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_4.conf 00:09:40.597 01:47:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:40.597 01:47:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:40.597 01:47:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 4 00:09:40.597 01:47:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4404 00:09:40.597 01:47:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:09:40.597 01:47:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' 00:09:40.597 01:47:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4404"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:40.597 01:47:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:40.597 01:47:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:40.597 01:47:10 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4404' -c /tmp/fuzz_json_4.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 -Z 4 00:09:40.597 [2024-10-09 01:47:10.100047] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 00:09:40.597 [2024-10-09 01:47:10.100126] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4041682 ] 00:09:40.856 [2024-10-09 01:47:10.297006] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.856 [2024-10-09 01:47:10.341459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.856 [2024-10-09 01:47:10.401765] tcp.c: 754:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:40.856 [2024-10-09 01:47:10.417970] tcp.c:1098:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4404 *** 00:09:40.856 INFO: Running with entropic power schedule (0xFF, 100). 
00:09:40.856 INFO: Seed: 3841172017 00:09:40.856 INFO: Loaded 1 modules (383850 inline 8-bit counters): 383850 [0x2be0a0c, 0x2c3e576), 00:09:40.856 INFO: Loaded 1 PC tables (383850 PCs): 383850 [0x2c3e578,0x3219c18), 00:09:40.856 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_4 00:09:40.856 INFO: A corpus is not provided, starting from an empty corpus 00:09:40.856 #2 INITED exec/s: 0 rss: 66Mb 00:09:40.856 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:40.856 This may also happen if the target rejected all inputs we tried so far 00:09:40.856 [2024-10-09 01:47:10.483948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:40.856 [2024-10-09 01:47:10.483977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:40.856 [2024-10-09 01:47:10.484029] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:40.856 [2024-10-09 01:47:10.484043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:40.857 [2024-10-09 01:47:10.484094] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:40.857 [2024-10-09 01:47:10.484109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:40.857 [2024-10-09 01:47:10.484159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:40.857 [2024-10-09 01:47:10.484172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:41.375 NEW_FUNC[1/715]: 0x441d58 in fuzz_admin_create_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:126 00:09:41.375 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:41.375 #7 NEW cov: 12076 ft: 12076 corp: 2/34b lim: 35 exec/s: 0 rss: 73Mb L: 33/33 MS: 5 InsertByte-CrossOver-CrossOver-EraseBytes-InsertRepeatedBytes- 00:09:41.375 [2024-10-09 01:47:10.824732] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:41.375 [2024-10-09 01:47:10.824792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:41.375 [2024-10-09 01:47:10.824882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:41.375 [2024-10-09 01:47:10.824909] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:41.375 #21 NEW cov: 12206 ft: 13065 corp: 3/53b lim: 35 exec/s: 0 rss: 73Mb L: 19/33 MS: 4 ShuffleBytes-ChangeBit-CrossOver-InsertRepeatedBytes- 00:09:41.375 [2024-10-09 01:47:10.874934] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:41.375 [2024-10-09 01:47:10.874963] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:41.375 [2024-10-09 01:47:10.875020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:41.375 [2024-10-09 01:47:10.875035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:41.375 [2024-10-09 01:47:10.875093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:41.375 [2024-10-09 01:47:10.875109] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:41.375 [2024-10-09 01:47:10.875164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:41.375 [2024-10-09 01:47:10.875178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:41.375 #27 NEW cov: 12212 ft: 13242 corp: 4/86b lim: 35 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 ShuffleBytes- 00:09:41.375 [2024-10-09 01:47:10.935136] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:41.375 [2024-10-09 01:47:10.935165] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:41.375 [2024-10-09 01:47:10.935221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:41.375 [2024-10-09 01:47:10.935235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:41.375 [2024-10-09 01:47:10.935292] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:41.375 [2024-10-09 01:47:10.935306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:41.375 [2024-10-09 01:47:10.935362] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00080000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:41.375 [2024-10-09 01:47:10.935376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:41.375 #28 NEW cov: 12297 ft: 13507 corp: 5/119b lim: 35 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 ChangeBit- 00:09:41.375 [2024-10-09 01:47:10.975198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:41.375 [2024-10-09 01:47:10.975224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:41.375 [2024-10-09 01:47:10.975281] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:41.375 [2024-10-09 01:47:10.975295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:41.375 [2024-10-09 01:47:10.975351] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:41.375 [2024-10-09 01:47:10.975366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:41.375 [2024-10-09 01:47:10.975420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:0000002f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:41.375 [2024-10-09 01:47:10.975433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:41.375 #29 NEW cov: 12297 ft: 13642 corp: 6/152b lim: 35 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 ChangeByte- 00:09:41.375 [2024-10-09 01:47:11.015325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:41.375 [2024-10-09 01:47:11.015351] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:41.375 [2024-10-09 01:47:11.015412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:41.375 [2024-10-09 01:47:11.015427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:41.375 [2024-10-09 01:47:11.015483] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:41.375 [2024-10-09 01:47:11.015498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:41.375 [2024-10-09 01:47:11.015552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:0000002f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:41.375 [2024-10-09 01:47:11.015565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:41.644 #30 NEW cov: 12297 ft: 13766 corp: 7/185b lim: 35 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 ChangeBinInt- 00:09:41.644 [2024-10-09 01:47:11.075183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:41.644 [2024-10-09 01:47:11.075210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:41.644 [2024-10-09 01:47:11.075268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:41.644 [2024-10-09 01:47:11.075283] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:41.644 #31 NEW cov: 12297 ft: 13861 
corp: 8/200b lim: 35 exec/s: 0 rss: 73Mb L: 15/33 MS: 1 CrossOver- 00:09:41.644 [2024-10-09 01:47:11.115596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:41.644 [2024-10-09 01:47:11.115621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:41.644 [2024-10-09 01:47:11.115679] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:41.644 [2024-10-09 01:47:11.115694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:41.644 [2024-10-09 01:47:11.115748] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:01000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:41.644 [2024-10-09 01:47:11.115763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:41.644 [2024-10-09 01:47:11.115823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:41.644 [2024-10-09 01:47:11.115837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:41.644 #32 NEW cov: 12297 ft: 13944 corp: 9/233b lim: 35 exec/s: 0 rss: 73Mb L: 33/33 MS: 1 ChangeBit- 00:09:41.644 [2024-10-09 01:47:11.175805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:41.644 [2024-10-09 01:47:11.175837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:41.644 [2024-10-09 01:47:11.175895] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:41.644 [2024-10-09 01:47:11.175910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:41.644 [2024-10-09 01:47:11.175967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ff000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:41.644 [2024-10-09 01:47:11.175982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:41.644 [2024-10-09 01:47:11.176038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:41.644 [2024-10-09 01:47:11.176051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:41.644 #33 NEW cov: 12297 ft: 13971 corp: 10/266b lim: 35 exec/s: 0 rss: 74Mb L: 33/33 MS: 1 InsertRepeatedBytes- 00:09:41.644 [2024-10-09 01:47:11.235842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2d16f941 cdw11:16160000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:41.644 [2024-10-09 01:47:11.235868] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:41.644 [2024-10-09 01:47:11.235924] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:16161616 cdw11:16160000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:41.644 [2024-10-09 01:47:11.235939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:41.644 [2024-10-09 01:47:11.235993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:16161616 cdw11:16160000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:41.644 [2024-10-09 01:47:11.236007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:41.644 #37 NEW cov: 12297 ft: 14206 corp: 11/293b lim: 35 exec/s: 0 rss: 74Mb L: 27/33 MS: 4 ChangeBinInt-InsertByte-InsertByte-InsertRepeatedBytes- 00:09:41.644 [2024-10-09 01:47:11.276227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:41.644 [2024-10-09 01:47:11.276253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:41.644 [2024-10-09 01:47:11.276309] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:41.644 [2024-10-09 01:47:11.276323] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:41.644 [2024-10-09 01:47:11.276380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:41.644 [2024-10-09 01:47:11.276394] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:41.644 [2024-10-09 01:47:11.276450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00080000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:41.644 [2024-10-09 01:47:11.276463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:41.644 [2024-10-09 01:47:11.276517] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:2c0a0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:41.644 [2024-10-09 01:47:11.276531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:41.903 #38 NEW cov: 12297 ft: 14316 corp: 12/328b lim: 35 exec/s: 0 rss: 74Mb L: 35/35 MS: 1 CrossOver- 00:09:41.903 [2024-10-09 01:47:11.336138] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:41.903 [2024-10-09 01:47:11.336166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:41.903 [2024-10-09 01:47:11.336225] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:41.903 [2024-10-09 01:47:11.336241] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:41.903 [2024-10-09 01:47:11.336299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:41.903 [2024-10-09 01:47:11.336314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:41.903 NEW_FUNC[1/1]: 0x1bf30c8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:09:41.903 #39 NEW cov: 12320 ft: 14360 corp: 13/353b lim: 35 exec/s: 0 rss: 74Mb L: 25/35 MS: 1 EraseBytes- 00:09:41.903 [2024-10-09 01:47:11.396099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:41.903 [2024-10-09 01:47:11.396125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:41.903 [2024-10-09 01:47:11.396183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:41.903 [2024-10-09 01:47:11.396197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:41.903 #40 NEW cov: 12320 ft: 14509 corp: 14/373b lim: 35 exec/s: 0 rss: 74Mb L: 20/35 MS: 1 CrossOver- 00:09:41.903 [2024-10-09 01:47:11.436164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:41.903 [2024-10-09 01:47:11.436190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:41.903 [2024-10-09 01:47:11.436249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:41.903 [2024-10-09 01:47:11.436263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:41.903 #41 NEW cov: 12320 ft: 14549 corp: 15/393b lim: 35 exec/s: 41 rss: 74Mb L: 20/35 MS: 1 CrossOver- 00:09:41.903 [2024-10-09 01:47:11.496349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:41.903 [2024-10-09 01:47:11.496374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:41.903 [2024-10-09 01:47:11.496431] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:41.903 [2024-10-09 01:47:11.496446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:41.903 #42 NEW cov: 12320 ft: 14574 corp: 16/413b lim: 35 exec/s: 42 rss: 74Mb L: 20/35 MS: 1 ShuffleBytes- 00:09:41.903 [2024-10-09 01:47:11.536444] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:41.903 [2024-10-09 01:47:11.536471] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:41.903 [2024-10-09 01:47:11.536531] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:41.903 [2024-10-09 01:47:11.536546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:42.161 #43 NEW cov: 12320 ft: 14612 corp: 17/428b lim: 35 exec/s: 43 rss: 74Mb L: 15/35 MS: 1 ShuffleBytes- 00:09:42.162 [2024-10-09 01:47:11.596796] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2d16f941 cdw11:16160000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.162 [2024-10-09 01:47:11.596840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:42.162 [2024-10-09 01:47:11.596897] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:16161616 cdw11:16160000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.162 [2024-10-09 01:47:11.596912] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:42.162 [2024-10-09 01:47:11.596967] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:16161616 cdw11:16160000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.162 [2024-10-09 01:47:11.596982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:42.162 #44 NEW cov: 12320 ft: 14650 corp: 18/450b lim: 35 exec/s: 44 rss: 74Mb L: 22/35 MS: 1 EraseBytes- 00:09:42.162 [2024-10-09 01:47:11.657125] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.162 [2024-10-09 01:47:11.657152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:42.162 [2024-10-09 01:47:11.657211] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.162 [2024-10-09 01:47:11.657229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:42.162 [2024-10-09 01:47:11.657284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.162 [2024-10-09 01:47:11.657299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:42.162 [2024-10-09 01:47:11.657357] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:0000002f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.162 [2024-10-09 01:47:11.657372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:42.162 #45 NEW cov: 12320 ft: 14674 corp: 19/483b lim: 35 exec/s: 45 rss: 74Mb L: 33/35 MS: 1 ShuffleBytes- 00:09:42.162 [2024-10-09 01:47:11.716966] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 
cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.162 [2024-10-09 01:47:11.716993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:42.162 [2024-10-09 01:47:11.717050] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.162 [2024-10-09 01:47:11.717065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:42.162 #46 NEW cov: 12320 ft: 14718 corp: 20/502b lim: 35 exec/s: 46 rss: 74Mb L: 19/35 MS: 1 EraseBytes- 00:09:42.162 [2024-10-09 01:47:11.757087] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.162 [2024-10-09 01:47:11.757115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:42.162 [2024-10-09 01:47:11.757173] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.162 [2024-10-09 01:47:11.757187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:42.162 #47 NEW cov: 12320 ft: 14744 corp: 21/522b lim: 35 exec/s: 47 rss: 74Mb L: 20/35 MS: 1 CopyPart- 00:09:42.162 [2024-10-09 01:47:11.817249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.162 [2024-10-09 01:47:11.817276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:42.162 [2024-10-09 01:47:11.817335] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:fff60000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.162 [2024-10-09 01:47:11.817350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:42.420 #48 NEW cov: 12320 ft: 14781 corp: 22/542b lim: 35 exec/s: 48 rss: 74Mb L: 20/35 MS: 1 ChangeBinInt- 00:09:42.420 [2024-10-09 01:47:11.857668] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.420 [2024-10-09 01:47:11.857695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:42.420 [2024-10-09 01:47:11.857754] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.420 [2024-10-09 01:47:11.857768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:42.420 [2024-10-09 01:47:11.857829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:01000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.420 [2024-10-09 01:47:11.857843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:42.420 [2024-10-09 
01:47:11.857899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.420 [2024-10-09 01:47:11.857915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:42.420 #49 NEW cov: 12320 ft: 14787 corp: 23/575b lim: 35 exec/s: 49 rss: 74Mb L: 33/35 MS: 1 ChangeBinInt- 00:09:42.420 [2024-10-09 01:47:11.897450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.420 [2024-10-09 01:47:11.897478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:42.420 [2024-10-09 01:47:11.897534] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:01000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.420 [2024-10-09 01:47:11.897548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:42.420 #50 NEW cov: 12320 ft: 14828 corp: 24/595b lim: 35 exec/s: 50 rss: 74Mb L: 20/35 MS: 1 ChangeBinInt- 00:09:42.420 [2024-10-09 01:47:11.937575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.420 [2024-10-09 01:47:11.937602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:42.420 [2024-10-09 01:47:11.937662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.420 [2024-10-09 01:47:11.937677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:42.420 #51 NEW cov: 12320 ft: 14835 corp: 25/612b lim: 35 exec/s: 51 rss: 74Mb L: 17/35 MS: 1 EraseBytes- 00:09:42.420 [2024-10-09 01:47:11.997713] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00ff0003 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.420 [2024-10-09 01:47:11.997744] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:42.420 [2024-10-09 01:47:11.997802] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:fff60000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.420 [2024-10-09 01:47:11.997821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:42.420 #52 NEW cov: 12320 ft: 14842 corp: 26/632b lim: 35 exec/s: 52 rss: 75Mb L: 20/35 MS: 1 ChangeByte- 00:09:42.420 [2024-10-09 01:47:12.057899] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.420 [2024-10-09 01:47:12.057926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:42.420 [2024-10-09 01:47:12.057984] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 
cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.420 [2024-10-09 01:47:12.057999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:42.420 #53 NEW cov: 12320 ft: 14862 corp: 27/652b lim: 35 exec/s: 53 rss: 75Mb L: 20/35 MS: 1 ShuffleBytes- 00:09:42.679 [2024-10-09 01:47:12.098025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.679 [2024-10-09 01:47:12.098052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:42.679 [2024-10-09 01:47:12.098109] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.679 [2024-10-09 01:47:12.098123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:42.679 #54 NEW cov: 12320 ft: 14907 corp: 28/672b lim: 35 exec/s: 54 rss: 75Mb L: 20/35 MS: 1 ShuffleBytes- 00:09:42.679 [2024-10-09 01:47:12.138429] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:1616f941 cdw11:16160000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.679 [2024-10-09 01:47:12.138455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:42.679 [2024-10-09 01:47:12.138514] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:16161616 cdw11:16160000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.679 [2024-10-09 01:47:12.138529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:42.679 [2024-10-09 01:47:12.138584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:16161616 cdw11:16160000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.679 [2024-10-09 01:47:12.138598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:42.679 [2024-10-09 01:47:12.138654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:16161616 cdw11:16160000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.679 [2024-10-09 01:47:12.138668] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:42.679 #55 NEW cov: 12320 ft: 14923 corp: 29/703b lim: 35 exec/s: 55 rss: 75Mb L: 31/35 MS: 1 CopyPart- 00:09:42.679 [2024-10-09 01:47:12.178393] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.679 [2024-10-09 01:47:12.178419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:42.679 [2024-10-09 01:47:12.178481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000001 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.679 [2024-10-09 01:47:12.178495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:42.679 [2024-10-09 
01:47:12.178551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.679 [2024-10-09 01:47:12.178566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:42.679 #56 NEW cov: 12320 ft: 14937 corp: 30/729b lim: 35 exec/s: 56 rss: 75Mb L: 26/35 MS: 1 InsertByte- 00:09:42.679 [2024-10-09 01:47:12.238714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:0000d700 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.679 [2024-10-09 01:47:12.238741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:42.679 [2024-10-09 01:47:12.238798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.679 [2024-10-09 01:47:12.238818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:42.679 [2024-10-09 01:47:12.238874] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.679 [2024-10-09 01:47:12.238888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:42.679 [2024-10-09 01:47:12.238944] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:2f000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.679 [2024-10-09 01:47:12.238957] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:42.679 #57 NEW cov: 12320 ft: 14947 corp: 31/763b lim: 35 exec/s: 57 rss: 75Mb L: 34/35 MS: 1 InsertByte- 00:09:42.679 [2024-10-09 01:47:12.278575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.679 [2024-10-09 01:47:12.278602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:42.679 [2024-10-09 01:47:12.278658] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.679 [2024-10-09 01:47:12.278673] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:42.679 #58 NEW cov: 12320 ft: 14970 corp: 32/778b lim: 35 exec/s: 58 rss: 75Mb L: 15/35 MS: 1 ShuffleBytes- 00:09:42.679 [2024-10-09 01:47:12.318829] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:2d167541 cdw11:16160000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.679 [2024-10-09 01:47:12.318855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:42.679 [2024-10-09 01:47:12.318913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:16161616 cdw11:16160000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.679 [2024-10-09 01:47:12.318928] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:42.679 [2024-10-09 01:47:12.318987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:16161616 cdw11:16160000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.679 [2024-10-09 01:47:12.319001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:42.938 #59 NEW cov: 12320 ft: 14977 corp: 33/800b lim: 35 exec/s: 59 rss: 75Mb L: 22/35 MS: 1 ChangeByte- 00:09:42.938 [2024-10-09 01:47:12.378667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:a468ee55 cdw11:15240000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.938 [2024-10-09 01:47:12.378694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:42.938 #60 NEW cov: 12320 ft: 15660 corp: 34/809b lim: 35 exec/s: 60 rss: 75Mb L: 9/35 MS: 1 CMP- DE: "\356U\244h\025$'\000"- 00:09:42.938 [2024-10-09 01:47:12.419227] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.938 [2024-10-09 01:47:12.419253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:42.938 [2024-10-09 01:47:12.419313] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.938 [2024-10-09 01:47:12.419327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:42.938 [2024-10-09 01:47:12.419384] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.938 [2024-10-09 01:47:12.419398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:42.938 [2024-10-09 01:47:12.419455] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.938 [2024-10-09 01:47:12.419469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:42.938 #61 NEW cov: 12320 ft: 15690 corp: 35/841b lim: 35 exec/s: 61 rss: 75Mb L: 32/35 MS: 1 InsertRepeatedBytes- 00:09:42.938 [2024-10-09 01:47:12.479489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.938 [2024-10-09 01:47:12.479516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:42.938 [2024-10-09 01:47:12.479573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.938 [2024-10-09 01:47:12.479588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:42.938 [2024-10-09 01:47:12.479641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) 
qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.938 [2024-10-09 01:47:12.479657] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:42.938 [2024-10-09 01:47:12.479715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO CQ (05) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:42.938 [2024-10-09 01:47:12.479728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:42.938 #62 NEW cov: 12320 ft: 15696 corp: 36/874b lim: 35 exec/s: 31 rss: 75Mb L: 33/35 MS: 1 ChangeBinInt- 00:09:42.938 #62 DONE cov: 12320 ft: 15696 corp: 36/874b lim: 35 exec/s: 31 rss: 75Mb 00:09:42.938 ###### Recommended dictionary. ###### 00:09:42.938 "\356U\244h\025$'\000" # Uses: 0 00:09:42.938 ###### End of recommended dictionary. ###### 00:09:42.938 Done 62 runs in 2 second(s) 00:09:43.196 01:47:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_4.conf /var/tmp/suppress_nvmf_fuzz 00:09:43.196 01:47:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:43.196 01:47:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:43.196 01:47:12 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:09:43.196 01:47:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=5 00:09:43.196 01:47:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:43.196 01:47:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:43.196 01:47:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:09:43.196 01:47:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_5.conf 00:09:43.196 01:47:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:43.196 01:47:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:43.196 01:47:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 5 00:09:43.196 01:47:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4405 00:09:43.196 01:47:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:09:43.196 01:47:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' 00:09:43.196 01:47:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4405"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:43.196 01:47:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:43.196 01:47:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:43.196 01:47:12 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4405' -c /tmp/fuzz_json_5.conf -t 1 -D 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 -Z 5 00:09:43.196 [2024-10-09 01:47:12.665616] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 00:09:43.196 [2024-10-09 01:47:12.665682] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4042037 ] 00:09:43.196 [2024-10-09 01:47:12.852047] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.455 [2024-10-09 01:47:12.891394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.455 [2024-10-09 01:47:12.950669] tcp.c: 754:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:43.455 [2024-10-09 01:47:12.966904] tcp.c:1098:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4405 *** 00:09:43.455 INFO: Running with entropic power schedule (0xFF, 100). 00:09:43.455 INFO: Seed: 2092225597 00:09:43.455 INFO: Loaded 1 modules (383850 inline 8-bit counters): 383850 [0x2be0a0c, 0x2c3e576), 00:09:43.455 INFO: Loaded 1 PC tables (383850 PCs): 383850 [0x2c3e578,0x3219c18), 00:09:43.455 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_5 00:09:43.455 INFO: A corpus is not provided, starting from an empty corpus 00:09:43.455 #2 INITED exec/s: 0 rss: 68Mb 00:09:43.455 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:43.455 This may also happen if the target rejected all inputs we tried so far 00:09:43.455 [2024-10-09 01:47:13.038865] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.455 [2024-10-09 01:47:13.038915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:43.455 [2024-10-09 01:47:13.039020] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.455 [2024-10-09 01:47:13.039040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:43.455 [2024-10-09 01:47:13.039146] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.455 [2024-10-09 01:47:13.039167] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:43.455 [2024-10-09 01:47:13.039272] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.455 [2024-10-09 01:47:13.039289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:43.713 NEW_FUNC[1/714]: 0x443ef8 in fuzz_admin_create_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:142 00:09:43.713 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:43.713 #4 NEW cov: 12102 ft: 12103 corp: 2/38b lim: 45 exec/s: 
0 rss: 74Mb L: 37/37 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:09:43.713 [2024-10-09 01:47:13.378247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:acacacac cdw11:acac0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.713 [2024-10-09 01:47:13.378299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:43.713 [2024-10-09 01:47:13.378400] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:acacacac cdw11:acac0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.713 [2024-10-09 01:47:13.378421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:43.971 NEW_FUNC[1/1]: 0x1911f68 in nvme_qpair_get_state /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/./nvme_internal.h:1539 00:09:43.971 #5 NEW cov: 12218 ft: 13088 corp: 3/61b lim: 45 exec/s: 0 rss: 74Mb L: 23/37 MS: 1 InsertRepeatedBytes- 00:09:43.971 [2024-10-09 01:47:13.438248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:acacacac cdw11:acac0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.971 [2024-10-09 01:47:13.438274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:43.971 [2024-10-09 01:47:13.438367] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:acacacac cdw11:acac0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.971 [2024-10-09 01:47:13.438382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:43.971 #6 NEW cov: 12224 ft: 13207 corp: 4/84b lim: 45 exec/s: 0 rss: 74Mb L: 23/37 MS: 1 CrossOver- 00:09:43.971 [2024-10-09 01:47:13.508159] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.971 [2024-10-09 01:47:13.508185] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:43.971 #10 NEW cov: 12309 ft: 14160 corp: 5/99b lim: 45 exec/s: 0 rss: 74Mb L: 15/37 MS: 4 ChangeByte-CopyPart-EraseBytes-InsertRepeatedBytes- 00:09:43.971 [2024-10-09 01:47:13.558373] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0a00 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.971 [2024-10-09 01:47:13.558399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:43.971 #16 NEW cov: 12309 ft: 14352 corp: 6/115b lim: 45 exec/s: 0 rss: 74Mb L: 16/37 MS: 1 CrossOver- 00:09:43.971 [2024-10-09 01:47:13.629760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.971 [2024-10-09 01:47:13.629784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:43.971 [2024-10-09 01:47:13.629869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.971 [2024-10-09 01:47:13.629885] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:43.971 [2024-10-09 01:47:13.629985] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.971 [2024-10-09 01:47:13.630000] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:43.971 [2024-10-09 01:47:13.630092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000400 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:43.971 [2024-10-09 01:47:13.630108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:44.229 #22 NEW cov: 12309 ft: 14428 corp: 7/154b lim: 45 exec/s: 0 rss: 74Mb L: 39/39 MS: 1 CMP- DE: "\004\000"- 00:09:44.229 [2024-10-09 01:47:13.698999] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:acacacac cdw11:acac0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.229 [2024-10-09 01:47:13.699026] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:44.229 #23 NEW cov: 12309 ft: 14558 corp: 8/168b lim: 45 exec/s: 0 rss: 74Mb L: 14/39 MS: 1 EraseBytes- 00:09:44.229 [2024-10-09 01:47:13.759185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0a00 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.229 [2024-10-09 01:47:13.759211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:44.229 #24 NEW cov: 12309 ft: 14623 corp: 9/178b lim: 45 exec/s: 0 rss: 74Mb L: 10/39 MS: 1 EraseBytes- 00:09:44.230 [2024-10-09 01:47:13.829448] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.230 [2024-10-09 01:47:13.829475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:44.230 #25 NEW cov: 12309 ft: 14693 corp: 10/194b lim: 45 exec/s: 0 rss: 74Mb L: 16/39 MS: 1 InsertByte- 00:09:44.230 [2024-10-09 01:47:13.880012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:acacaced cdw11:acac0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.230 [2024-10-09 01:47:13.880038] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:44.230 [2024-10-09 01:47:13.880123] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:acacacac cdw11:acac0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.230 [2024-10-09 01:47:13.880138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:44.488 NEW_FUNC[1/1]: 0x1bf30c8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:09:44.488 #26 NEW cov: 12332 ft: 14756 corp: 11/218b lim: 45 exec/s: 0 rss: 74Mb L: 24/39 MS: 1 InsertByte- 00:09:44.488 [2024-10-09 01:47:13.931099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 
cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.488 [2024-10-09 01:47:13.931123] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:44.488 [2024-10-09 01:47:13.931209] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.488 [2024-10-09 01:47:13.931224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:44.488 [2024-10-09 01:47:13.931320] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.488 [2024-10-09 01:47:13.931338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:44.488 [2024-10-09 01:47:13.931422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.488 [2024-10-09 01:47:13.931436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:44.488 #27 NEW cov: 12332 ft: 14794 corp: 12/257b lim: 45 exec/s: 0 rss: 74Mb L: 39/39 MS: 1 PersAutoDict- DE: "\004\000"- 00:09:44.488 [2024-10-09 01:47:13.980666] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:acacaced cdw11:acac0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.488 [2024-10-09 01:47:13.980691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:44.488 [2024-10-09 01:47:13.980776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:acacacac cdw11:acac0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.488 [2024-10-09 01:47:13.980792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:44.488 #28 NEW cov: 12332 ft: 14813 corp: 13/281b lim: 45 exec/s: 28 rss: 75Mb L: 24/39 MS: 1 ShuffleBytes- 00:09:44.488 [2024-10-09 01:47:14.051111] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0a00 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.488 [2024-10-09 01:47:14.051137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:44.488 [2024-10-09 01:47:14.051242] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.488 [2024-10-09 01:47:14.051258] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:44.488 #29 NEW cov: 12332 ft: 14836 corp: 14/307b lim: 45 exec/s: 29 rss: 75Mb L: 26/39 MS: 1 InsertRepeatedBytes- 00:09:44.488 [2024-10-09 01:47:14.101305] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:acacaced cdw11:acac0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.488 [2024-10-09 01:47:14.101332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:44.488 
[2024-10-09 01:47:14.101423] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:acacacac cdw11:acac0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.488 [2024-10-09 01:47:14.101439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:44.488 #30 NEW cov: 12332 ft: 14872 corp: 15/331b lim: 45 exec/s: 30 rss: 75Mb L: 24/39 MS: 1 ChangeBit- 00:09:44.747 [2024-10-09 01:47:14.171482] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.747 [2024-10-09 01:47:14.171509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:44.747 #31 NEW cov: 12332 ft: 14897 corp: 16/346b lim: 45 exec/s: 31 rss: 75Mb L: 15/39 MS: 1 ShuffleBytes- 00:09:44.747 [2024-10-09 01:47:14.222987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.747 [2024-10-09 01:47:14.223013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:44.747 [2024-10-09 01:47:14.223106] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.747 [2024-10-09 01:47:14.223125] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:44.747 [2024-10-09 01:47:14.223220] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:10000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.747 [2024-10-09 01:47:14.223234] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:44.747 [2024-10-09 01:47:14.223316] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.747 [2024-10-09 01:47:14.223330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:44.747 #32 NEW cov: 12332 ft: 14908 corp: 17/385b lim: 45 exec/s: 32 rss: 75Mb L: 39/39 MS: 1 ChangeBit- 00:09:44.747 [2024-10-09 01:47:14.292289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff0a00 cdw11:feff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.747 [2024-10-09 01:47:14.292315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:44.747 #33 NEW cov: 12332 ft: 14932 corp: 18/401b lim: 45 exec/s: 33 rss: 75Mb L: 16/39 MS: 1 ChangeBinInt- 00:09:44.747 [2024-10-09 01:47:14.343390] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.747 [2024-10-09 01:47:14.343415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:44.747 [2024-10-09 01:47:14.343501] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:82828282 
cdw11:82820004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.747 [2024-10-09 01:47:14.343516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:44.747 [2024-10-09 01:47:14.343600] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:82828282 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.747 [2024-10-09 01:47:14.343614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:44.747 #34 NEW cov: 12332 ft: 15144 corp: 19/430b lim: 45 exec/s: 34 rss: 75Mb L: 29/39 MS: 1 InsertRepeatedBytes- 00:09:44.747 [2024-10-09 01:47:14.413782] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.747 [2024-10-09 01:47:14.413807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:44.747 [2024-10-09 01:47:14.413901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:82828282 cdw11:82820004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:44.747 [2024-10-09 01:47:14.413917] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:45.006 [2024-10-09 01:47:14.414003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:82828282 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:45.006 [2024-10-09 01:47:14.414021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:45.006 #35 NEW cov: 12332 ft: 15154 corp: 20/459b lim: 45 exec/s: 35 rss: 75Mb L: 29/39 MS: 1 ChangeBinInt- 00:09:45.006 [2024-10-09 01:47:14.483790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:acacacad cdw11:acac0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:45.006 [2024-10-09 01:47:14.483819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:45.006 [2024-10-09 01:47:14.483908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:acacacac cdw11:acac0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:45.006 [2024-10-09 01:47:14.483928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:45.006 #36 NEW cov: 12332 ft: 15207 corp: 21/483b lim: 45 exec/s: 36 rss: 75Mb L: 24/39 MS: 1 ChangeBit- 00:09:45.006 [2024-10-09 01:47:14.554052] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:acacacac cdw11:acac0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:45.006 [2024-10-09 01:47:14.554077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:45.006 [2024-10-09 01:47:14.554172] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:acacacac cdw11:acac0000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:45.006 [2024-10-09 01:47:14.554187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:45.006 #37 NEW cov: 12332 
ft: 15329 corp: 22/506b lim: 45 exec/s: 37 rss: 75Mb L: 23/39 MS: 1 ChangeBinInt- 00:09:45.006 [2024-10-09 01:47:14.624597] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:45.006 [2024-10-09 01:47:14.624622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:45.006 [2024-10-09 01:47:14.624711] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:82828282 cdw11:828b0004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:45.006 [2024-10-09 01:47:14.624727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:45.006 [2024-10-09 01:47:14.624817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:82828282 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:45.006 [2024-10-09 01:47:14.624833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:45.006 #38 NEW cov: 12332 ft: 15337 corp: 23/535b lim: 45 exec/s: 38 rss: 75Mb L: 29/39 MS: 1 ChangeBinInt- 00:09:45.264 [2024-10-09 01:47:14.675041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:45.264 [2024-10-09 01:47:14.675067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:45.264 [2024-10-09 01:47:14.675154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:82828282 cdw11:82820004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:45.264 [2024-10-09 01:47:14.675169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:45.264 [2024-10-09 01:47:14.675254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:82828282 cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:45.264 [2024-10-09 01:47:14.675269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:45.264 #39 NEW cov: 12332 ft: 15355 corp: 24/564b lim: 45 exec/s: 39 rss: 75Mb L: 29/39 MS: 1 ChangeBit- 00:09:45.264 [2024-10-09 01:47:14.744778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:acacaced cdw11:acac0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:45.264 [2024-10-09 01:47:14.744804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:45.264 #40 NEW cov: 12332 ft: 15361 corp: 25/581b lim: 45 exec/s: 40 rss: 75Mb L: 17/39 MS: 1 EraseBytes- 00:09:45.264 [2024-10-09 01:47:14.795385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:acacacac cdw11:acac0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:45.264 [2024-10-09 01:47:14.795413] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:45.264 [2024-10-09 01:47:14.795495] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:acacacac cdw11:ac960005 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:09:45.264 [2024-10-09 01:47:14.795511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:45.264 #46 NEW cov: 12332 ft: 15366 corp: 26/605b lim: 45 exec/s: 46 rss: 75Mb L: 24/39 MS: 1 InsertByte- 00:09:45.264 [2024-10-09 01:47:14.845857] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:acacaced cdw11:acac0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:45.264 [2024-10-09 01:47:14.845882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:45.264 [2024-10-09 01:47:14.845964] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:acacacac cdw11:acac0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:45.264 [2024-10-09 01:47:14.845978] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:45.265 #47 NEW cov: 12332 ft: 15373 corp: 27/629b lim: 45 exec/s: 47 rss: 75Mb L: 24/39 MS: 1 CopyPart- 00:09:45.265 [2024-10-09 01:47:14.896718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:acacaced cdw11:acac0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:45.265 [2024-10-09 01:47:14.896743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:45.265 [2024-10-09 01:47:14.896832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:acacacac cdw11:acac0005 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:45.265 [2024-10-09 01:47:14.896847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:45.265 [2024-10-09 01:47:14.896934] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:45.265 [2024-10-09 01:47:14.896948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:45.265 [2024-10-09 01:47:14.897039] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:45.265 [2024-10-09 01:47:14.897053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:45.265 #48 NEW cov: 12332 ft: 15377 corp: 28/671b lim: 45 exec/s: 48 rss: 75Mb L: 42/42 MS: 1 InsertRepeatedBytes- 00:09:45.524 [2024-10-09 01:47:14.946953] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:45.524 [2024-10-09 01:47:14.946979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:45.524 [2024-10-09 01:47:14.947070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:45.524 [2024-10-09 01:47:14.947085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:45.524 [2024-10-09 01:47:14.947180] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:45.524 [2024-10-09 01:47:14.947194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:45.524 [2024-10-09 01:47:14.947276] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:7 nsid:0 cdw10:00000400 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:45.524 [2024-10-09 01:47:14.947295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:45.524 #49 NEW cov: 12332 ft: 15391 corp: 29/710b lim: 45 exec/s: 49 rss: 75Mb L: 39/42 MS: 1 ChangeBinInt- 00:09:45.524 [2024-10-09 01:47:15.017124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:4 nsid:0 cdw10:ffff00ff cdw11:ffff0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:45.524 [2024-10-09 01:47:15.017150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:45.524 [2024-10-09 01:47:15.017236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:5 nsid:0 cdw10:82828282 cdw11:82820004 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:45.524 [2024-10-09 01:47:15.017252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:45.524 [2024-10-09 01:47:15.017356] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: CREATE IO SQ (01) qid:0 cid:6 nsid:0 cdw10:82828282 cdw11:ff5b0007 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:45.524 [2024-10-09 01:47:15.017371] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:45.524 #50 NEW cov: 12332 ft: 15398 corp: 30/740b lim: 45 exec/s: 25 rss: 75Mb L: 30/42 MS: 1 InsertByte- 00:09:45.524 #50 DONE cov: 12332 ft: 15398 corp: 30/740b lim: 45 exec/s: 25 rss: 75Mb 00:09:45.524 ###### Recommended dictionary. ###### 00:09:45.524 "\004\000" # Uses: 2 00:09:45.524 ###### End of recommended dictionary. 
###### 00:09:45.524 Done 50 runs in 2 second(s) 00:09:45.524 01:47:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_5.conf /var/tmp/suppress_nvmf_fuzz 00:09:45.524 01:47:15 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:45.524 01:47:15 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:45.524 01:47:15 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:09:45.524 01:47:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=6 00:09:45.524 01:47:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:45.524 01:47:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:45.524 01:47:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:09:45.524 01:47:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_6.conf 00:09:45.524 01:47:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:45.524 01:47:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:45.524 01:47:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 6 00:09:45.524 01:47:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4406 00:09:45.524 01:47:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:09:45.524 01:47:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' 00:09:45.524 01:47:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4406"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:45.524 01:47:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:45.524 01:47:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:45.524 01:47:15 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4406' -c /tmp/fuzz_json_6.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 -Z 6 00:09:45.783 [2024-10-09 01:47:15.214160] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 00:09:45.783 [2024-10-09 01:47:15.214231] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4042398 ] 00:09:46.042 [2024-10-09 01:47:15.512728] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.042 [2024-10-09 01:47:15.574782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.042 [2024-10-09 01:47:15.633886] tcp.c: 754:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:46.042 [2024-10-09 01:47:15.650104] tcp.c:1098:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4406 *** 00:09:46.042 INFO: Running with entropic power schedule (0xFF, 100). 
00:09:46.042 INFO: Seed: 483225203 00:09:46.042 INFO: Loaded 1 modules (383850 inline 8-bit counters): 383850 [0x2be0a0c, 0x2c3e576), 00:09:46.042 INFO: Loaded 1 PC tables (383850 PCs): 383850 [0x2c3e578,0x3219c18), 00:09:46.042 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_6 00:09:46.042 INFO: A corpus is not provided, starting from an empty corpus 00:09:46.042 #2 INITED exec/s: 0 rss: 66Mb 00:09:46.042 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:46.042 This may also happen if the target rejected all inputs we tried so far 00:09:46.042 [2024-10-09 01:47:15.705064] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000de0a cdw11:00000000 00:09:46.042 [2024-10-09 01:47:15.705103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:46.558 NEW_FUNC[1/713]: 0x446708 in fuzz_admin_delete_io_completion_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:161 00:09:46.558 NEW_FUNC[2/713]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:46.558 #3 NEW cov: 12022 ft: 12020 corp: 2/3b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 InsertByte- 00:09:46.558 [2024-10-09 01:47:16.055875] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a26 cdw11:00000000 00:09:46.558 [2024-10-09 01:47:16.055923] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:46.558 #4 NEW cov: 12135 ft: 12639 corp: 3/5b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 InsertByte- 00:09:46.558 [2024-10-09 01:47:16.115882] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:09:46.558 [2024-10-09 01:47:16.115918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:46.558 #6 NEW cov: 12141 ft: 12918 corp: 4/7b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 2 ShuffleBytes-CrossOver- 00:09:46.558 [2024-10-09 01:47:16.165993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000260a cdw11:00000000 00:09:46.558 [2024-10-09 01:47:16.166025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:46.816 #7 NEW cov: 12226 ft: 13205 corp: 5/9b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 ShuffleBytes- 00:09:46.816 [2024-10-09 01:47:16.256336] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000260a cdw11:00000000 00:09:46.816 [2024-10-09 01:47:16.256372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:46.816 #8 NEW cov: 12226 ft: 13292 corp: 6/12b lim: 10 exec/s: 0 rss: 74Mb L: 3/3 MS: 1 CrossOver- 00:09:46.816 [2024-10-09 01:47:16.346717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:09:46.816 [2024-10-09 01:47:16.346750] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:46.816 [2024-10-09 
01:47:16.346783] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:46.816 [2024-10-09 01:47:16.346799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:46.816 [2024-10-09 01:47:16.346842] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:46.816 [2024-10-09 01:47:16.346859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:46.816 [2024-10-09 01:47:16.346887] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:46.816 [2024-10-09 01:47:16.346903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:46.816 [2024-10-09 01:47:16.346931] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ff0a cdw11:00000000 00:09:46.816 [2024-10-09 01:47:16.346947] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:46.816 #9 NEW cov: 12226 ft: 13707 corp: 7/22b lim: 10 exec/s: 0 rss: 74Mb L: 10/10 MS: 1 InsertRepeatedBytes- 00:09:46.816 [2024-10-09 01:47:16.436717] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000b0b cdw11:00000000 00:09:46.816 [2024-10-09 01:47:16.436748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:46.816 #13 NEW cov: 12226 ft: 13858 corp: 8/24b lim: 10 exec/s: 0 rss: 74Mb L: 2/10 MS: 4 EraseBytes-ChangeBit-CopyPart-CopyPart- 00:09:47.074 [2024-10-09 01:47:16.496861] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:09:47.074 [2024-10-09 01:47:16.496893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:47.074 #14 NEW cov: 12226 ft: 13931 corp: 9/26b lim: 10 exec/s: 0 rss: 74Mb L: 2/10 MS: 1 ShuffleBytes- 00:09:47.074 [2024-10-09 01:47:16.547033] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000260a cdw11:00000000 00:09:47.074 [2024-10-09 01:47:16.547065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:47.074 NEW_FUNC[1/1]: 0x1bf30c8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:09:47.074 #15 NEW cov: 12249 ft: 13970 corp: 10/29b lim: 10 exec/s: 0 rss: 74Mb L: 3/10 MS: 1 CrossOver- 00:09:47.074 [2024-10-09 01:47:16.607149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000be24 cdw11:00000000 00:09:47.074 [2024-10-09 01:47:16.607181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:47.074 #20 NEW cov: 12249 ft: 14011 corp: 11/31b lim: 10 exec/s: 0 rss: 74Mb L: 2/10 MS: 5 EraseBytes-ChangeByte-ChangeByte-ChangeByte-InsertByte- 00:09:47.074 [2024-10-09 01:47:16.697461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000de0a 
cdw11:00000000 00:09:47.074 [2024-10-09 01:47:16.697493] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:47.074 [2024-10-09 01:47:16.697523] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000de0a cdw11:00000000 00:09:47.074 [2024-10-09 01:47:16.697538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:47.074 #26 NEW cov: 12249 ft: 14211 corp: 12/35b lim: 10 exec/s: 26 rss: 74Mb L: 4/10 MS: 1 CopyPart- 00:09:47.332 [2024-10-09 01:47:16.757529] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000df0a cdw11:00000000 00:09:47.332 [2024-10-09 01:47:16.757561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:47.332 #27 NEW cov: 12249 ft: 14233 corp: 13/37b lim: 10 exec/s: 27 rss: 74Mb L: 2/10 MS: 1 ChangeBit- 00:09:47.332 [2024-10-09 01:47:16.807719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000de04 cdw11:00000000 00:09:47.332 [2024-10-09 01:47:16.807760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:47.333 [2024-10-09 01:47:16.807793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 00:09:47.333 [2024-10-09 01:47:16.807809] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:47.333 #28 NEW cov: 12249 ft: 14280 corp: 14/41b lim: 10 exec/s: 28 rss: 74Mb L: 4/10 MS: 1 ChangeBinInt- 00:09:47.333 [2024-10-09 01:47:16.897952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000250a cdw11:00000000 00:09:47.333 [2024-10-09 01:47:16.897988] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:47.333 #29 NEW cov: 12249 ft: 14333 corp: 15/44b lim: 10 exec/s: 29 rss: 74Mb L: 3/10 MS: 1 ChangeByte- 00:09:47.333 [2024-10-09 01:47:16.988188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:09:47.333 [2024-10-09 01:47:16.988222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:47.591 #31 NEW cov: 12249 ft: 14337 corp: 16/46b lim: 10 exec/s: 31 rss: 74Mb L: 2/10 MS: 2 EraseBytes-CopyPart- 00:09:47.591 [2024-10-09 01:47:17.078606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:09:47.591 [2024-10-09 01:47:17.078640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:47.591 [2024-10-09 01:47:17.078671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:47.591 [2024-10-09 01:47:17.078686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:47.591 [2024-10-09 01:47:17.078712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff 
cdw11:00000000 00:09:47.591 [2024-10-09 01:47:17.078728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:47.591 [2024-10-09 01:47:17.078755] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:47.591 [2024-10-09 01:47:17.078771] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:47.591 [2024-10-09 01:47:17.078797] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ff0a cdw11:00000000 00:09:47.591 [2024-10-09 01:47:17.078822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:47.591 #32 NEW cov: 12249 ft: 14356 corp: 17/56b lim: 10 exec/s: 32 rss: 74Mb L: 10/10 MS: 1 ShuffleBytes- 00:09:47.591 [2024-10-09 01:47:17.168696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000260a cdw11:00000000 00:09:47.591 [2024-10-09 01:47:17.168729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:47.591 [2024-10-09 01:47:17.168761] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00009898 cdw11:00000000 00:09:47.591 [2024-10-09 01:47:17.168776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:47.591 #33 NEW cov: 12249 ft: 14389 corp: 18/61b lim: 10 exec/s: 33 rss: 74Mb L: 5/10 MS: 1 InsertRepeatedBytes- 00:09:47.591 [2024-10-09 01:47:17.219751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00002525 cdw11:00000000 00:09:47.591 [2024-10-09 01:47:17.219791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:47.591 [2024-10-09 01:47:17.219871] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00000a0a cdw11:00000000 00:09:47.591 [2024-10-09 01:47:17.219891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:47.591 [2024-10-09 01:47:17.219956] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:00000a0a cdw11:00000000 00:09:47.591 [2024-10-09 01:47:17.219975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:47.849 #34 NEW cov: 12249 ft: 14548 corp: 19/67b lim: 10 exec/s: 34 rss: 74Mb L: 6/10 MS: 1 CrossOver- 00:09:47.849 [2024-10-09 01:47:17.279614] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:09:47.849 [2024-10-09 01:47:17.279641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:47.849 #35 NEW cov: 12249 ft: 14698 corp: 20/70b lim: 10 exec/s: 35 rss: 74Mb L: 3/10 MS: 1 ShuffleBytes- 00:09:47.849 [2024-10-09 01:47:17.320223] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:09:47.849 [2024-10-09 01:47:17.320251] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:47.849 [2024-10-09 01:47:17.320307] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:47.849 [2024-10-09 01:47:17.320322] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:47.849 [2024-10-09 01:47:17.320377] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:47.849 [2024-10-09 01:47:17.320392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:47.849 [2024-10-09 01:47:17.320445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:47.850 [2024-10-09 01:47:17.320459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:47.850 [2024-10-09 01:47:17.320510] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ff0a cdw11:00000000 00:09:47.850 [2024-10-09 01:47:17.320525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:47.850 #36 NEW cov: 12249 ft: 14765 corp: 21/80b lim: 10 exec/s: 36 rss: 74Mb L: 10/10 MS: 1 ShuffleBytes- 00:09:47.850 [2024-10-09 01:47:17.359952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:0000260a cdw11:00000000 00:09:47.850 [2024-10-09 01:47:17.359980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:47.850 [2024-10-09 01:47:17.360034] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:00009898 cdw11:00000000 00:09:47.850 [2024-10-09 01:47:17.360049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:47.850 #37 NEW cov: 12249 ft: 14796 corp: 22/85b lim: 10 exec/s: 37 rss: 74Mb L: 5/10 MS: 1 CopyPart- 00:09:47.850 [2024-10-09 01:47:17.420481] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:09:47.850 [2024-10-09 01:47:17.420508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:47.850 [2024-10-09 01:47:17.420562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:47.850 [2024-10-09 01:47:17.420577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:47.850 [2024-10-09 01:47:17.420629] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:47.850 [2024-10-09 01:47:17.420644] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:47.850 [2024-10-09 01:47:17.420696] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:47.850 [2024-10-09 01:47:17.420710] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:47.850 [2024-10-09 01:47:17.420762] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ff41 cdw11:00000000 00:09:47.850 [2024-10-09 01:47:17.420776] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:47.850 #38 NEW cov: 12249 ft: 14837 corp: 23/95b lim: 10 exec/s: 38 rss: 74Mb L: 10/10 MS: 1 ChangeByte- 00:09:47.850 [2024-10-09 01:47:17.460339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000afd cdw11:00000000 00:09:47.850 [2024-10-09 01:47:17.460366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:47.850 [2024-10-09 01:47:17.460420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000fdfd cdw11:00000000 00:09:47.850 [2024-10-09 01:47:17.460433] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:47.850 [2024-10-09 01:47:17.460489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000fdfd cdw11:00000000 00:09:47.850 [2024-10-09 01:47:17.460503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:47.850 #39 NEW cov: 12249 ft: 14859 corp: 24/102b lim: 10 exec/s: 39 rss: 74Mb L: 7/10 MS: 1 InsertRepeatedBytes- 00:09:47.850 [2024-10-09 01:47:17.500667] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:09:47.850 [2024-10-09 01:47:17.500696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:47.850 [2024-10-09 01:47:17.500751] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:47.850 [2024-10-09 01:47:17.500766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:47.850 [2024-10-09 01:47:17.500822] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:47.850 [2024-10-09 01:47:17.500837] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:47.850 [2024-10-09 01:47:17.500890] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:47.850 [2024-10-09 01:47:17.500903] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:47.850 [2024-10-09 01:47:17.500955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ff0a cdw11:00000000 00:09:47.850 [2024-10-09 01:47:17.500969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:48.108 #40 NEW cov: 12249 ft: 14882 corp: 25/112b lim: 10 exec/s: 40 rss: 74Mb L: 10/10 MS: 1 ShuffleBytes- 00:09:48.108 [2024-10-09 01:47:17.540303] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 
nsid:0 cdw10:00000a0a cdw11:00000000 00:09:48.108 [2024-10-09 01:47:17.540330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:48.108 #41 NEW cov: 12249 ft: 14960 corp: 26/114b lim: 10 exec/s: 41 rss: 74Mb L: 2/10 MS: 1 ShuffleBytes- 00:09:48.108 [2024-10-09 01:47:17.600979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000aff cdw11:00000000 00:09:48.108 [2024-10-09 01:47:17.601005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:48.108 [2024-10-09 01:47:17.601059] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:5 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:48.109 [2024-10-09 01:47:17.601073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:48.109 [2024-10-09 01:47:17.601124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:6 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:48.109 [2024-10-09 01:47:17.601139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:48.109 [2024-10-09 01:47:17.601191] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:7 nsid:0 cdw10:0000ffff cdw11:00000000 00:09:48.109 [2024-10-09 01:47:17.601205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:48.109 [2024-10-09 01:47:17.601260] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:8 nsid:0 cdw10:0000ff0a cdw11:00000000 00:09:48.109 [2024-10-09 01:47:17.601274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:48.109 #42 NEW cov: 12249 ft: 14987 corp: 27/124b lim: 10 exec/s: 42 rss: 74Mb L: 10/10 MS: 1 ShuffleBytes- 00:09:48.109 [2024-10-09 01:47:17.660714] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO CQ (04) qid:0 cid:4 nsid:0 cdw10:00000b0b cdw11:00000000 00:09:48.109 [2024-10-09 01:47:17.660740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:48.109 #43 NEW cov: 12249 ft: 15005 corp: 28/126b lim: 10 exec/s: 21 rss: 74Mb L: 2/10 MS: 1 ShuffleBytes- 00:09:48.109 #43 DONE cov: 12249 ft: 15005 corp: 28/126b lim: 10 exec/s: 21 rss: 74Mb 00:09:48.109 Done 43 runs in 2 second(s) 00:09:48.367 01:47:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_6.conf /var/tmp/suppress_nvmf_fuzz 00:09:48.367 01:47:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:48.367 01:47:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:48.367 01:47:17 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 7 1 0x1 00:09:48.367 01:47:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=7 00:09:48.367 01:47:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:48.367 01:47:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:48.367 01:47:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:09:48.367 01:47:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local 
nvmf_cfg=/tmp/fuzz_json_7.conf 00:09:48.367 01:47:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:48.367 01:47:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:48.367 01:47:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 7 00:09:48.367 01:47:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4407 00:09:48.367 01:47:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:09:48.367 01:47:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' 00:09:48.367 01:47:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4407"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:48.367 01:47:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:48.367 01:47:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:48.367 01:47:17 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4407' -c /tmp/fuzz_json_7.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 -Z 7 00:09:48.367 [2024-10-09 01:47:17.854127] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 00:09:48.367 [2024-10-09 01:47:17.854196] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4042754 ] 00:09:48.625 [2024-10-09 01:47:18.053090] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.625 [2024-10-09 01:47:18.091605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.625 [2024-10-09 01:47:18.150572] tcp.c: 754:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:48.625 [2024-10-09 01:47:18.166775] tcp.c:1098:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4407 *** 00:09:48.625 INFO: Running with entropic power schedule (0xFF, 100). 00:09:48.625 INFO: Seed: 2997226258 00:09:48.625 INFO: Loaded 1 modules (383850 inline 8-bit counters): 383850 [0x2be0a0c, 0x2c3e576), 00:09:48.625 INFO: Loaded 1 PC tables (383850 PCs): 383850 [0x2c3e578,0x3219c18), 00:09:48.625 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_7 00:09:48.625 INFO: A corpus is not provided, starting from an empty corpus 00:09:48.625 #2 INITED exec/s: 0 rss: 66Mb 00:09:48.625 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:09:48.625 This may also happen if the target rejected all inputs we tried so far 00:09:48.625 [2024-10-09 01:47:18.214364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:09:48.625 [2024-10-09 01:47:18.214393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:48.883 NEW_FUNC[1/713]: 0x447108 in fuzz_admin_delete_io_submission_queue_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:172 00:09:48.883 NEW_FUNC[2/713]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:48.883 #3 NEW cov: 12022 ft: 12020 corp: 2/3b lim: 10 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 CopyPart- 00:09:48.883 [2024-10-09 01:47:18.545337] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:09:48.883 [2024-10-09 01:47:18.545376] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:49.142 #4 NEW cov: 12135 ft: 12638 corp: 3/6b lim: 10 exec/s: 0 rss: 73Mb L: 3/3 MS: 1 InsertByte- 00:09:49.142 [2024-10-09 01:47:18.605432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000d0a cdw11:00000000 00:09:49.142 [2024-10-09 01:47:18.605463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:49.142 #5 NEW cov: 12141 ft: 13022 corp: 4/8b lim: 10 exec/s: 0 rss: 73Mb L: 2/3 MS: 1 ChangeByte- 00:09:49.142 [2024-10-09 01:47:18.645611] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00005b0a cdw11:00000000 00:09:49.142 [2024-10-09 01:47:18.645638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:49.142 [2024-10-09 01:47:18.645693] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a5b cdw11:00000000 00:09:49.142 [2024-10-09 01:47:18.645707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:49.142 #6 NEW cov: 12226 ft: 13468 corp: 5/12b lim: 10 exec/s: 0 rss: 73Mb L: 4/4 MS: 1 InsertByte- 00:09:49.142 [2024-10-09 01:47:18.705653] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:09:49.142 [2024-10-09 01:47:18.705680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:49.142 #7 NEW cov: 12226 ft: 13599 corp: 6/15b lim: 10 exec/s: 0 rss: 73Mb L: 3/4 MS: 1 InsertByte- 00:09:49.142 [2024-10-09 01:47:18.745790] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a04 cdw11:00000000 00:09:49.142 [2024-10-09 01:47:18.745822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:49.142 #8 NEW cov: 12226 ft: 13660 corp: 7/18b lim: 10 exec/s: 0 rss: 74Mb L: 3/4 MS: 1 ChangeBinInt- 00:09:49.142 [2024-10-09 01:47:18.806228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a 
cdw11:00000000 00:09:49.142 [2024-10-09 01:47:18.806255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:49.142 [2024-10-09 01:47:18.806310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000b2b2 cdw11:00000000 00:09:49.142 [2024-10-09 01:47:18.806324] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:49.142 [2024-10-09 01:47:18.806392] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:0000b25b cdw11:00000000 00:09:49.142 [2024-10-09 01:47:18.806407] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:49.400 #9 NEW cov: 12226 ft: 13905 corp: 8/24b lim: 10 exec/s: 0 rss: 74Mb L: 6/6 MS: 1 InsertRepeatedBytes- 00:09:49.400 [2024-10-09 01:47:18.846192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00005b0a cdw11:00000000 00:09:49.400 [2024-10-09 01:47:18.846229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:49.400 [2024-10-09 01:47:18.846284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000ad9 cdw11:00000000 00:09:49.400 [2024-10-09 01:47:18.846298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:49.400 #10 NEW cov: 12226 ft: 13961 corp: 9/28b lim: 10 exec/s: 0 rss: 74Mb L: 4/6 MS: 1 ChangeByte- 00:09:49.400 [2024-10-09 01:47:18.906228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:09:49.400 [2024-10-09 01:47:18.906254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:49.400 #11 NEW cov: 12226 ft: 13978 corp: 10/30b lim: 10 exec/s: 0 rss: 74Mb L: 2/6 MS: 1 ShuffleBytes- 00:09:49.400 [2024-10-09 01:47:18.946321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:09:49.400 [2024-10-09 01:47:18.946348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:49.400 #12 NEW cov: 12226 ft: 14054 corp: 11/32b lim: 10 exec/s: 0 rss: 74Mb L: 2/6 MS: 1 CrossOver- 00:09:49.400 [2024-10-09 01:47:18.986452] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a3b cdw11:00000000 00:09:49.400 [2024-10-09 01:47:18.986478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:49.400 #13 NEW cov: 12226 ft: 14103 corp: 12/35b lim: 10 exec/s: 0 rss: 74Mb L: 3/6 MS: 1 ShuffleBytes- 00:09:49.400 [2024-10-09 01:47:19.026565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:09:49.400 [2024-10-09 01:47:19.026590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:49.400 #14 NEW cov: 12226 ft: 14151 corp: 13/38b lim: 10 exec/s: 0 rss: 74Mb L: 3/6 MS: 1 CrossOver- 00:09:49.659 [2024-10-09 01:47:19.086743] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:09:49.659 [2024-10-09 01:47:19.086770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:49.659 NEW_FUNC[1/1]: 0x1bf30c8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:09:49.659 #15 NEW cov: 12249 ft: 14187 corp: 14/41b lim: 10 exec/s: 0 rss: 74Mb L: 3/6 MS: 1 ShuffleBytes- 00:09:49.659 [2024-10-09 01:47:19.126883] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000acb cdw11:00000000 00:09:49.659 [2024-10-09 01:47:19.126910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:49.659 #16 NEW cov: 12249 ft: 14203 corp: 15/44b lim: 10 exec/s: 0 rss: 74Mb L: 3/6 MS: 1 ChangeByte- 00:09:49.659 [2024-10-09 01:47:19.187077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000acb cdw11:00000000 00:09:49.659 [2024-10-09 01:47:19.187104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:49.659 #17 NEW cov: 12249 ft: 14231 corp: 16/47b lim: 10 exec/s: 17 rss: 74Mb L: 3/6 MS: 1 ChangeBit- 00:09:49.659 [2024-10-09 01:47:19.247571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000d83 cdw11:00000000 00:09:49.659 [2024-10-09 01:47:19.247598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:49.659 [2024-10-09 01:47:19.247656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008383 cdw11:00000000 00:09:49.659 [2024-10-09 01:47:19.247670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:49.659 [2024-10-09 01:47:19.247725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00008383 cdw11:00000000 00:09:49.659 [2024-10-09 01:47:19.247740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:49.659 [2024-10-09 01:47:19.247794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00008383 cdw11:00000000 00:09:49.659 [2024-10-09 01:47:19.247807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:49.659 #18 NEW cov: 12249 ft: 14510 corp: 17/56b lim: 10 exec/s: 18 rss: 74Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:09:49.659 [2024-10-09 01:47:19.307485] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000acb cdw11:00000000 00:09:49.659 [2024-10-09 01:47:19.307512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:49.659 [2024-10-09 01:47:19.307569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00003b30 cdw11:00000000 00:09:49.659 [2024-10-09 01:47:19.307584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:49.917 #19 NEW cov: 12249 ft: 14545 
corp: 18/60b lim: 10 exec/s: 19 rss: 74Mb L: 4/9 MS: 1 InsertByte- 00:09:49.917 [2024-10-09 01:47:19.347826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:09:49.917 [2024-10-09 01:47:19.347853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:49.917 [2024-10-09 01:47:19.347909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:09:49.917 [2024-10-09 01:47:19.347924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:49.917 [2024-10-09 01:47:19.347980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 00:09:49.917 [2024-10-09 01:47:19.347994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:49.917 [2024-10-09 01:47:19.348051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 00:09:49.917 [2024-10-09 01:47:19.348066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:49.917 #20 NEW cov: 12249 ft: 14567 corp: 19/69b lim: 10 exec/s: 20 rss: 74Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:09:49.918 [2024-10-09 01:47:19.407753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:09:49.918 [2024-10-09 01:47:19.407779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:49.918 [2024-10-09 01:47:19.407840] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000025b cdw11:00000000 00:09:49.918 [2024-10-09 01:47:19.407855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:49.918 #21 NEW cov: 12249 ft: 14592 corp: 20/73b lim: 10 exec/s: 21 rss: 74Mb L: 4/9 MS: 1 InsertByte- 00:09:49.918 [2024-10-09 01:47:19.448114] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000d83 cdw11:00000000 00:09:49.918 [2024-10-09 01:47:19.448141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:49.918 [2024-10-09 01:47:19.448199] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008383 cdw11:00000000 00:09:49.918 [2024-10-09 01:47:19.448214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:49.918 [2024-10-09 01:47:19.448268] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00008383 cdw11:00000000 00:09:49.918 [2024-10-09 01:47:19.448284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:49.918 [2024-10-09 01:47:19.448341] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00008383 cdw11:00000000 00:09:49.918 [2024-10-09 01:47:19.448357] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:49.918 #22 NEW cov: 12249 ft: 14608 corp: 21/82b lim: 10 exec/s: 22 rss: 74Mb L: 9/9 MS: 1 ShuffleBytes- 00:09:49.918 [2024-10-09 01:47:19.508081] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00005b0a cdw11:00000000 00:09:49.918 [2024-10-09 01:47:19.508108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:49.918 [2024-10-09 01:47:19.508164] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a5b cdw11:00000000 00:09:49.918 [2024-10-09 01:47:19.508179] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:49.918 #27 NEW cov: 12249 ft: 14625 corp: 22/86b lim: 10 exec/s: 27 rss: 74Mb L: 4/9 MS: 5 CrossOver-ChangeBit-CrossOver-CopyPart-CrossOver- 00:09:49.918 [2024-10-09 01:47:19.548113] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:000004cb cdw11:00000000 00:09:49.918 [2024-10-09 01:47:19.548140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:49.918 [2024-10-09 01:47:19.548196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00003b30 cdw11:00000000 00:09:49.918 [2024-10-09 01:47:19.548210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:50.176 #28 NEW cov: 12249 ft: 14686 corp: 23/90b lim: 10 exec/s: 28 rss: 74Mb L: 4/9 MS: 1 ChangeBinInt- 00:09:50.176 [2024-10-09 01:47:19.608569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000d83 cdw11:00000000 00:09:50.176 [2024-10-09 01:47:19.608597] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:50.176 [2024-10-09 01:47:19.608655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00008383 cdw11:00000000 00:09:50.176 [2024-10-09 01:47:19.608669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:50.176 [2024-10-09 01:47:19.608725] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00008383 cdw11:00000000 00:09:50.176 [2024-10-09 01:47:19.608740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:50.176 [2024-10-09 01:47:19.608793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00008383 cdw11:00000000 00:09:50.176 [2024-10-09 01:47:19.608808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:50.176 #29 NEW cov: 12249 ft: 14702 corp: 24/99b lim: 10 exec/s: 29 rss: 74Mb L: 9/9 MS: 1 ShuffleBytes- 00:09:50.176 [2024-10-09 01:47:19.648427] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000430 cdw11:00000000 00:09:50.176 [2024-10-09 01:47:19.648453] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 
00:09:50.176 [2024-10-09 01:47:19.648508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00003bcb cdw11:00000000 00:09:50.176 [2024-10-09 01:47:19.648522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:50.176 #30 NEW cov: 12249 ft: 14750 corp: 25/103b lim: 10 exec/s: 30 rss: 74Mb L: 4/9 MS: 1 ShuffleBytes- 00:09:50.176 [2024-10-09 01:47:19.708594] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 00:09:50.176 [2024-10-09 01:47:19.708621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:50.176 [2024-10-09 01:47:19.708676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a5b cdw11:00000000 00:09:50.176 [2024-10-09 01:47:19.708689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:50.176 #31 NEW cov: 12249 ft: 14752 corp: 26/107b lim: 10 exec/s: 31 rss: 75Mb L: 4/9 MS: 1 CMP- DE: "\000\004"- 00:09:50.176 [2024-10-09 01:47:19.768661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a3b cdw11:00000000 00:09:50.176 [2024-10-09 01:47:19.768689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:50.176 #32 NEW cov: 12249 ft: 14766 corp: 27/110b lim: 10 exec/s: 32 rss: 75Mb L: 3/9 MS: 1 ShuffleBytes- 00:09:50.176 [2024-10-09 01:47:19.828786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a3b cdw11:00000000 00:09:50.176 [2024-10-09 01:47:19.828816] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:50.435 #33 NEW cov: 12249 ft: 14780 corp: 28/113b lim: 10 exec/s: 33 rss: 75Mb L: 3/9 MS: 1 CrossOver- 00:09:50.435 [2024-10-09 01:47:19.869041] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a02 cdw11:00000000 00:09:50.435 [2024-10-09 01:47:19.869069] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:50.435 [2024-10-09 01:47:19.869128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a5b cdw11:00000000 00:09:50.435 [2024-10-09 01:47:19.869142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:50.435 #34 NEW cov: 12249 ft: 14806 corp: 29/117b lim: 10 exec/s: 34 rss: 75Mb L: 4/9 MS: 1 ShuffleBytes- 00:09:50.435 [2024-10-09 01:47:19.929491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:09:50.435 [2024-10-09 01:47:19.929520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:50.435 [2024-10-09 01:47:19.929576] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00006969 cdw11:00000000 00:09:50.435 [2024-10-09 01:47:19.929590] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 
dnr:0 00:09:50.435 [2024-10-09 01:47:19.929644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:6 nsid:0 cdw10:00006969 cdw11:00000000 00:09:50.435 [2024-10-09 01:47:19.929659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:50.435 [2024-10-09 01:47:19.929712] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:7 nsid:0 cdw10:00006969 cdw11:00000000 00:09:50.435 [2024-10-09 01:47:19.929725] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:50.435 #35 NEW cov: 12249 ft: 14835 corp: 30/126b lim: 10 exec/s: 35 rss: 75Mb L: 9/9 MS: 1 InsertRepeatedBytes- 00:09:50.435 [2024-10-09 01:47:19.969329] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 00:09:50.435 [2024-10-09 01:47:19.969357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:50.435 [2024-10-09 01:47:19.969412] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a04 cdw11:00000000 00:09:50.435 [2024-10-09 01:47:19.969427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:50.435 #36 NEW cov: 12249 ft: 14851 corp: 31/130b lim: 10 exec/s: 36 rss: 75Mb L: 4/9 MS: 1 CrossOver- 00:09:50.435 [2024-10-09 01:47:20.009386] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:09:50.435 [2024-10-09 01:47:20.009416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:50.435 #37 NEW cov: 12249 ft: 14890 corp: 32/132b lim: 10 exec/s: 37 rss: 75Mb L: 2/9 MS: 1 CopyPart- 00:09:50.435 [2024-10-09 01:47:20.049486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a2a cdw11:00000000 00:09:50.435 [2024-10-09 01:47:20.049517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:50.435 #38 NEW cov: 12249 ft: 14958 corp: 33/134b lim: 10 exec/s: 38 rss: 75Mb L: 2/9 MS: 1 ChangeBit- 00:09:50.435 [2024-10-09 01:47:20.089656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000004 cdw11:00000000 00:09:50.435 [2024-10-09 01:47:20.089696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:50.435 [2024-10-09 01:47:20.089781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:00000a5b cdw11:00000000 00:09:50.435 [2024-10-09 01:47:20.089799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:50.694 #39 NEW cov: 12249 ft: 15036 corp: 34/138b lim: 10 exec/s: 39 rss: 75Mb L: 4/9 MS: 1 ShuffleBytes- 00:09:50.694 [2024-10-09 01:47:20.149777] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00000a0a cdw11:00000000 00:09:50.694 [2024-10-09 01:47:20.149807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 
dnr:0 00:09:50.694 #40 NEW cov: 12249 ft: 15070 corp: 35/140b lim: 10 exec/s: 40 rss: 75Mb L: 2/9 MS: 1 CopyPart- 00:09:50.694 [2024-10-09 01:47:20.189943] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:4 nsid:0 cdw10:00003b04 cdw11:00000000 00:09:50.694 [2024-10-09 01:47:20.189969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:50.694 [2024-10-09 01:47:20.190023] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DELETE IO SQ (00) qid:0 cid:5 nsid:0 cdw10:0000cb30 cdw11:00000000 00:09:50.694 [2024-10-09 01:47:20.190037] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:50.694 #41 NEW cov: 12249 ft: 15114 corp: 36/144b lim: 10 exec/s: 20 rss: 75Mb L: 4/9 MS: 1 ShuffleBytes- 00:09:50.694 #41 DONE cov: 12249 ft: 15114 corp: 36/144b lim: 10 exec/s: 20 rss: 75Mb 00:09:50.694 ###### Recommended dictionary. ###### 00:09:50.694 "\000\004" # Uses: 0 00:09:50.694 ###### End of recommended dictionary. ###### 00:09:50.694 Done 41 runs in 2 second(s) 00:09:50.694 01:47:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_7.conf /var/tmp/suppress_nvmf_fuzz 00:09:50.694 01:47:20 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:50.694 01:47:20 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:50.694 01:47:20 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 8 1 0x1 00:09:50.694 01:47:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=8 00:09:50.694 01:47:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:50.694 01:47:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:50.694 01:47:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:09:50.694 01:47:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_8.conf 00:09:50.694 01:47:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:50.694 01:47:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:50.694 01:47:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 8 00:09:50.694 01:47:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4408 00:09:50.694 01:47:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:09:50.694 01:47:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' 00:09:50.694 01:47:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4408"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:50.694 01:47:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:50.694 01:47:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:50.694 01:47:20 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 
subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4408' -c /tmp/fuzz_json_8.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 -Z 8 00:09:50.952 [2024-10-09 01:47:20.387156] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 00:09:50.952 [2024-10-09 01:47:20.387233] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4043112 ] 00:09:50.952 [2024-10-09 01:47:20.597166] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.210 [2024-10-09 01:47:20.636984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.210 [2024-10-09 01:47:20.696237] tcp.c: 754:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:51.210 [2024-10-09 01:47:20.712444] tcp.c:1098:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4408 *** 00:09:51.210 INFO: Running with entropic power schedule (0xFF, 100). 00:09:51.210 INFO: Seed: 1249273126 00:09:51.210 INFO: Loaded 1 modules (383850 inline 8-bit counters): 383850 [0x2be0a0c, 0x2c3e576), 00:09:51.210 INFO: Loaded 1 PC tables (383850 PCs): 383850 [0x2c3e578,0x3219c18), 00:09:51.210 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_8 00:09:51.210 INFO: A corpus is not provided, starting from an empty corpus 00:09:51.210 [2024-10-09 01:47:20.790139] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:51.210 [2024-10-09 01:47:20.790182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:51.210 #2 INITED cov: 12031 ft: 12032 corp: 1/1b exec/s: 0 rss: 72Mb 00:09:51.210 [2024-10-09 01:47:20.841565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:51.210 [2024-10-09 01:47:20.841592] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:51.210 [2024-10-09 01:47:20.841710] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:51.210 [2024-10-09 01:47:20.841727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:51.210 [2024-10-09 01:47:20.841827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:51.210 [2024-10-09 01:47:20.841843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:51.210 [2024-10-09 01:47:20.841951] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:51.210 [2024-10-09 01:47:20.841968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:51.210 [2024-10-09 01:47:20.842073] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:51.210 [2024-10-09 01:47:20.842089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:51.730 NEW_FUNC[1/1]: 0xf67c38 in rte_get_timer_cycles /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/include/generic/rte_cycles.h:94 00:09:51.730 #3 NEW cov: 12162 ft: 13422 corp: 2/6b lim: 5 exec/s: 0 rss: 74Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:09:51.730 [2024-10-09 01:47:21.202635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:51.730 [2024-10-09 01:47:21.202686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:51.730 [2024-10-09 01:47:21.202792] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:51.730 [2024-10-09 01:47:21.202819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:51.730 [2024-10-09 01:47:21.202918] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:51.730 [2024-10-09 01:47:21.202939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:51.730 [2024-10-09 01:47:21.203035] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:51.730 [2024-10-09 01:47:21.203056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:51.730 [2024-10-09 01:47:21.203155] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:51.730 [2024-10-09 01:47:21.203175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:51.730 #4 NEW cov: 12168 ft: 13709 corp: 3/11b lim: 5 exec/s: 0 rss: 74Mb L: 5/5 MS: 1 CrossOver- 00:09:51.730 [2024-10-09 01:47:21.272745] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:51.730 [2024-10-09 01:47:21.272772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:51.730 [2024-10-09 01:47:21.272877] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:51.730 [2024-10-09 01:47:21.272894] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:51.730 [2024-10-09 01:47:21.272998] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:09:51.730 [2024-10-09 01:47:21.273014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:51.730 [2024-10-09 01:47:21.273097] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:51.730 [2024-10-09 01:47:21.273111] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:51.730 [2024-10-09 01:47:21.273198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:51.730 [2024-10-09 01:47:21.273213] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:51.730 #5 NEW cov: 12253 ft: 13942 corp: 4/16b lim: 5 exec/s: 0 rss: 74Mb L: 5/5 MS: 1 InsertRepeatedBytes- 00:09:51.730 [2024-10-09 01:47:21.323150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:51.730 [2024-10-09 01:47:21.323175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:51.730 [2024-10-09 01:47:21.323263] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:51.730 [2024-10-09 01:47:21.323279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:51.730 [2024-10-09 01:47:21.323368] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:51.730 [2024-10-09 01:47:21.323384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:51.730 [2024-10-09 01:47:21.323477] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:51.730 [2024-10-09 01:47:21.323492] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:51.730 [2024-10-09 01:47:21.323585] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:51.730 [2024-10-09 01:47:21.323599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:51.730 #6 NEW cov: 12253 ft: 14078 corp: 5/21b lim: 5 exec/s: 0 rss: 74Mb L: 5/5 MS: 1 ChangeBinInt- 00:09:51.730 [2024-10-09 01:47:21.373188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:51.730 [2024-10-09 01:47:21.373212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:51.730 [2024-10-09 01:47:21.373299] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 
nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:51.730 [2024-10-09 01:47:21.373315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:51.730 [2024-10-09 01:47:21.373406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:51.730 [2024-10-09 01:47:21.373420] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:51.731 [2024-10-09 01:47:21.373516] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:51.731 [2024-10-09 01:47:21.373532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:51.731 [2024-10-09 01:47:21.373623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:51.731 [2024-10-09 01:47:21.373637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:52.052 #7 NEW cov: 12253 ft: 14198 corp: 6/26b lim: 5 exec/s: 0 rss: 74Mb L: 5/5 MS: 1 CrossOver- 00:09:52.052 [2024-10-09 01:47:21.443827] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.052 [2024-10-09 01:47:21.443854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:52.052 [2024-10-09 01:47:21.443970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.052 [2024-10-09 01:47:21.443985] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:52.052 [2024-10-09 01:47:21.444077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.052 [2024-10-09 01:47:21.444092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:52.052 [2024-10-09 01:47:21.444192] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.052 [2024-10-09 01:47:21.444207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:52.052 [2024-10-09 01:47:21.444297] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.052 [2024-10-09 01:47:21.444311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:52.052 #8 NEW cov: 12253 ft: 14259 corp: 7/31b lim: 5 exec/s: 0 rss: 74Mb L: 5/5 MS: 1 ShuffleBytes- 00:09:52.052 [2024-10-09 01:47:21.494070] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.052 [2024-10-09 01:47:21.494095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:52.052 [2024-10-09 01:47:21.494194] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.052 [2024-10-09 01:47:21.494208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:52.052 [2024-10-09 01:47:21.494326] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.052 [2024-10-09 01:47:21.494341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:52.052 [2024-10-09 01:47:21.494430] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.052 [2024-10-09 01:47:21.494446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:52.052 [2024-10-09 01:47:21.494539] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.052 [2024-10-09 01:47:21.494553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:52.052 #9 NEW cov: 12253 ft: 14351 corp: 8/36b lim: 5 exec/s: 0 rss: 74Mb L: 5/5 MS: 1 ChangeBit- 00:09:52.052 [2024-10-09 01:47:21.564623] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.052 [2024-10-09 01:47:21.564647] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:52.052 [2024-10-09 01:47:21.564738] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.052 [2024-10-09 01:47:21.564754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:52.052 [2024-10-09 01:47:21.564866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.052 [2024-10-09 01:47:21.564881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:52.052 [2024-10-09 01:47:21.564974] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.052 [2024-10-09 01:47:21.564989] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:52.052 [2024-10-09 01:47:21.565075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: 
NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.052 [2024-10-09 01:47:21.565089] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:52.052 #10 NEW cov: 12253 ft: 14389 corp: 9/41b lim: 5 exec/s: 0 rss: 74Mb L: 5/5 MS: 1 ChangeByte- 00:09:52.052 [2024-10-09 01:47:21.614422] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.052 [2024-10-09 01:47:21.614446] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:52.052 [2024-10-09 01:47:21.614562] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.052 [2024-10-09 01:47:21.614578] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:52.052 [2024-10-09 01:47:21.614662] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.052 [2024-10-09 01:47:21.614677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:52.052 [2024-10-09 01:47:21.614764] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.052 [2024-10-09 01:47:21.614778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:52.052 #11 NEW cov: 12253 ft: 14465 corp: 10/45b lim: 5 exec/s: 0 rss: 74Mb L: 4/5 MS: 1 EraseBytes- 00:09:52.052 [2024-10-09 01:47:21.665310] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.052 [2024-10-09 01:47:21.665334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:52.052 [2024-10-09 01:47:21.665432] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.052 [2024-10-09 01:47:21.665448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:52.052 [2024-10-09 01:47:21.665540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.052 [2024-10-09 01:47:21.665556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:52.052 [2024-10-09 01:47:21.665651] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.052 [2024-10-09 01:47:21.665667] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:52.052 [2024-10-09 01:47:21.665759] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.052 [2024-10-09 01:47:21.665775] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:52.344 NEW_FUNC[1/1]: 0x1bf30c8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:09:52.344 #12 NEW cov: 12276 ft: 14525 corp: 11/50b lim: 5 exec/s: 0 rss: 74Mb L: 5/5 MS: 1 CopyPart- 00:09:52.344 [2024-10-09 01:47:21.734382] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.344 [2024-10-09 01:47:21.734408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:52.344 [2024-10-09 01:47:21.784570] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.344 [2024-10-09 01:47:21.784595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:52.344 #14 NEW cov: 12276 ft: 14547 corp: 12/51b lim: 5 exec/s: 14 rss: 74Mb L: 1/5 MS: 2 ChangeBit-CopyPart- 00:09:52.344 [2024-10-09 01:47:21.836409] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.344 [2024-10-09 01:47:21.836436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:52.344 [2024-10-09 01:47:21.836528] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.344 [2024-10-09 01:47:21.836544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:52.344 [2024-10-09 01:47:21.836641] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.344 [2024-10-09 01:47:21.836655] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:52.344 [2024-10-09 01:47:21.836744] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.344 [2024-10-09 01:47:21.836759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:52.344 [2024-10-09 01:47:21.836866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.344 [2024-10-09 01:47:21.836882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:52.344 #15 NEW cov: 12276 ft: 14581 corp: 13/56b lim: 5 exec/s: 15 rss: 74Mb L: 5/5 MS: 1 ChangeByte- 00:09:52.344 [2024-10-09 01:47:21.886425] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE 
ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.344 [2024-10-09 01:47:21.886450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:52.344 [2024-10-09 01:47:21.886540] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.344 [2024-10-09 01:47:21.886555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:52.344 [2024-10-09 01:47:21.886644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.344 [2024-10-09 01:47:21.886661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:52.344 [2024-10-09 01:47:21.886752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.344 [2024-10-09 01:47:21.886766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:52.344 #16 NEW cov: 12276 ft: 14588 corp: 14/60b lim: 5 exec/s: 16 rss: 74Mb L: 4/5 MS: 1 CrossOver- 00:09:52.344 [2024-10-09 01:47:21.957250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.344 [2024-10-09 01:47:21.957275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:52.344 [2024-10-09 01:47:21.957382] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.344 [2024-10-09 01:47:21.957399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:52.344 [2024-10-09 01:47:21.957492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.344 [2024-10-09 01:47:21.957510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:52.344 [2024-10-09 01:47:21.957600] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.344 [2024-10-09 01:47:21.957616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:52.344 [2024-10-09 01:47:21.957708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.344 [2024-10-09 01:47:21.957724] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:52.344 #17 NEW cov: 12276 ft: 14615 corp: 15/65b lim: 5 exec/s: 17 rss: 74Mb L: 5/5 MS: 1 ShuffleBytes- 00:09:52.603 [2024-10-09 01:47:22.026129] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.603 [2024-10-09 01:47:22.026154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:52.603 #18 NEW cov: 12276 ft: 14641 corp: 16/66b lim: 5 exec/s: 18 rss: 74Mb L: 1/5 MS: 1 CopyPart- 00:09:52.603 [2024-10-09 01:47:22.097776] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.603 [2024-10-09 01:47:22.097800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:52.603 [2024-10-09 01:47:22.097905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.603 [2024-10-09 01:47:22.097922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:52.603 [2024-10-09 01:47:22.098008] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.603 [2024-10-09 01:47:22.098035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:52.603 [2024-10-09 01:47:22.098133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.603 [2024-10-09 01:47:22.098148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:52.603 [2024-10-09 01:47:22.098236] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.603 [2024-10-09 01:47:22.098253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:52.603 #19 NEW cov: 12276 ft: 14662 corp: 17/71b lim: 5 exec/s: 19 rss: 74Mb L: 5/5 MS: 1 CrossOver- 00:09:52.603 [2024-10-09 01:47:22.147825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.603 [2024-10-09 01:47:22.147850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:52.603 [2024-10-09 01:47:22.147942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.603 [2024-10-09 01:47:22.147956] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:52.603 [2024-10-09 01:47:22.148054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.603 [2024-10-09 01:47:22.148072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 
sqhd:0011 p:0 m:0 dnr:0 00:09:52.603 [2024-10-09 01:47:22.148168] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.603 [2024-10-09 01:47:22.148183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:52.603 #20 NEW cov: 12276 ft: 14715 corp: 18/75b lim: 5 exec/s: 20 rss: 74Mb L: 4/5 MS: 1 ShuffleBytes- 00:09:52.603 [2024-10-09 01:47:22.218556] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.603 [2024-10-09 01:47:22.218582] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:52.603 [2024-10-09 01:47:22.218674] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.603 [2024-10-09 01:47:22.218689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:52.603 [2024-10-09 01:47:22.218778] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.603 [2024-10-09 01:47:22.218794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:52.603 [2024-10-09 01:47:22.218892] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.603 [2024-10-09 01:47:22.218908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:52.603 [2024-10-09 01:47:22.219003] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.603 [2024-10-09 01:47:22.219020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:52.603 #21 NEW cov: 12276 ft: 14726 corp: 19/80b lim: 5 exec/s: 21 rss: 74Mb L: 5/5 MS: 1 ChangeBit- 00:09:52.603 [2024-10-09 01:47:22.268793] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.603 [2024-10-09 01:47:22.268823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:52.603 [2024-10-09 01:47:22.268926] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.603 [2024-10-09 01:47:22.268942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:52.603 [2024-10-09 01:47:22.269048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.603 [2024-10-09 01:47:22.269063] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:52.603 [2024-10-09 01:47:22.269148] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.603 [2024-10-09 01:47:22.269161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:52.603 [2024-10-09 01:47:22.269248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.603 [2024-10-09 01:47:22.269265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:52.862 #22 NEW cov: 12276 ft: 14729 corp: 20/85b lim: 5 exec/s: 22 rss: 74Mb L: 5/5 MS: 1 ChangeByte- 00:09:52.862 [2024-10-09 01:47:22.318922] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.862 [2024-10-09 01:47:22.318948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:52.862 [2024-10-09 01:47:22.319037] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.862 [2024-10-09 01:47:22.319053] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:52.862 [2024-10-09 01:47:22.319147] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.862 [2024-10-09 01:47:22.319162] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:52.862 [2024-10-09 01:47:22.319250] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.862 [2024-10-09 01:47:22.319265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:52.862 [2024-10-09 01:47:22.319359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.862 [2024-10-09 01:47:22.319374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:52.862 #23 NEW cov: 12276 ft: 14741 corp: 21/90b lim: 5 exec/s: 23 rss: 74Mb L: 5/5 MS: 1 CrossOver- 00:09:52.862 [2024-10-09 01:47:22.369449] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.862 [2024-10-09 01:47:22.369474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:52.862 [2024-10-09 01:47:22.369577] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:09:52.862 [2024-10-09 01:47:22.369595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:52.862 [2024-10-09 01:47:22.369692] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.862 [2024-10-09 01:47:22.369708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:52.862 [2024-10-09 01:47:22.369805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.863 [2024-10-09 01:47:22.369825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:52.863 [2024-10-09 01:47:22.369923] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.863 [2024-10-09 01:47:22.369939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:52.863 #24 NEW cov: 12276 ft: 14754 corp: 22/95b lim: 5 exec/s: 24 rss: 74Mb L: 5/5 MS: 1 ChangeByte- 00:09:52.863 [2024-10-09 01:47:22.419700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.863 [2024-10-09 01:47:22.419729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:52.863 [2024-10-09 01:47:22.419811] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.863 [2024-10-09 01:47:22.419833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:52.863 [2024-10-09 01:47:22.419916] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.863 [2024-10-09 01:47:22.419930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:52.863 [2024-10-09 01:47:22.420019] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.863 [2024-10-09 01:47:22.420036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:52.863 [2024-10-09 01:47:22.420121] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.863 [2024-10-09 01:47:22.420138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:52.863 #25 NEW cov: 12276 ft: 14755 corp: 23/100b lim: 5 exec/s: 25 rss: 75Mb L: 5/5 MS: 1 ShuffleBytes- 00:09:52.863 [2024-10-09 01:47:22.489826] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 
cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.863 [2024-10-09 01:47:22.489853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:52.863 [2024-10-09 01:47:22.489939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.863 [2024-10-09 01:47:22.489955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:52.863 [2024-10-09 01:47:22.490045] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.863 [2024-10-09 01:47:22.490063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:52.863 [2024-10-09 01:47:22.490149] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.863 [2024-10-09 01:47:22.490163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:52.863 [2024-10-09 01:47:22.490259] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:52.863 [2024-10-09 01:47:22.490275] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:52.863 #26 NEW cov: 12276 ft: 14775 corp: 24/105b lim: 5 exec/s: 26 rss: 75Mb L: 5/5 MS: 1 ShuffleBytes- 00:09:53.122 [2024-10-09 01:47:22.539536] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:53.122 [2024-10-09 01:47:22.539562] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:53.122 [2024-10-09 01:47:22.539649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:53.122 [2024-10-09 01:47:22.539669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:53.122 [2024-10-09 01:47:22.539753] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:53.122 [2024-10-09 01:47:22.539769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:53.122 #27 NEW cov: 12276 ft: 15009 corp: 25/108b lim: 5 exec/s: 27 rss: 75Mb L: 3/5 MS: 1 EraseBytes- 00:09:53.122 [2024-10-09 01:47:22.610205] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:53.122 [2024-10-09 01:47:22.610231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:53.122 [2024-10-09 01:47:22.610319] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:53.122 [2024-10-09 01:47:22.610334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:53.122 [2024-10-09 01:47:22.610428] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:53.122 [2024-10-09 01:47:22.610444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:53.122 [2024-10-09 01:47:22.610530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:53.122 [2024-10-09 01:47:22.610545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:53.122 [2024-10-09 01:47:22.610638] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:53.122 [2024-10-09 01:47:22.610654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:53.122 #28 NEW cov: 12276 ft: 15041 corp: 26/113b lim: 5 exec/s: 28 rss: 75Mb L: 5/5 MS: 1 ChangeBinInt- 00:09:53.122 [2024-10-09 01:47:22.680689] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:53.122 [2024-10-09 01:47:22.680714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:53.122 [2024-10-09 01:47:22.680820] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:53.122 [2024-10-09 01:47:22.680836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:53.122 [2024-10-09 01:47:22.680942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:53.122 [2024-10-09 01:47:22.680959] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:53.122 [2024-10-09 01:47:22.681057] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:53.122 [2024-10-09 01:47:22.681074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:53.122 [2024-10-09 01:47:22.681167] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:53.122 [2024-10-09 01:47:22.681186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:53.122 #29 NEW cov: 12276 ft: 15073 corp: 27/118b lim: 5 exec/s: 29 rss: 75Mb L: 5/5 MS: 1 
CrossOver- 00:09:53.122 [2024-10-09 01:47:22.751158] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:53.122 [2024-10-09 01:47:22.751184] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:53.122 [2024-10-09 01:47:22.751284] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:53.122 [2024-10-09 01:47:22.751299] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:53.122 [2024-10-09 01:47:22.751389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:6 nsid:0 cdw10:0000000a cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:53.122 [2024-10-09 01:47:22.751403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:53.122 [2024-10-09 01:47:22.751500] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:53.122 [2024-10-09 01:47:22.751515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:53.122 [2024-10-09 01:47:22.751608] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE ATTACHMENT (15) qid:0 cid:8 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:53.122 [2024-10-09 01:47:22.751621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:53.122 #30 NEW cov: 12276 ft: 15109 corp: 28/123b lim: 5 exec/s: 15 rss: 75Mb L: 5/5 MS: 1 ChangeBinInt- 00:09:53.122 #30 DONE cov: 12276 ft: 15109 corp: 28/123b lim: 5 exec/s: 15 rss: 75Mb 00:09:53.122 Done 30 runs in 2 second(s) 00:09:53.381 01:47:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_8.conf /var/tmp/suppress_nvmf_fuzz 00:09:53.381 01:47:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:53.381 01:47:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:53.381 01:47:22 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 9 1 0x1 00:09:53.381 01:47:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=9 00:09:53.381 01:47:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:53.381 01:47:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:53.381 01:47:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:09:53.381 01:47:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_9.conf 00:09:53.381 01:47:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:53.381 01:47:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:53.381 01:47:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 9 00:09:53.381 01:47:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4409 00:09:53.381 01:47:22 llvm_fuzz.nvmf_llvm_fuzz 
-- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:09:53.381 01:47:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' 00:09:53.381 01:47:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4409"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:53.381 01:47:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:53.381 01:47:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:53.381 01:47:22 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4409' -c /tmp/fuzz_json_9.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 -Z 9 00:09:53.381 [2024-10-09 01:47:22.932862] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 00:09:53.381 [2024-10-09 01:47:22.932948] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4043473 ] 00:09:53.640 [2024-10-09 01:47:23.130692] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.640 [2024-10-09 01:47:23.168785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.640 [2024-10-09 01:47:23.227916] tcp.c: 754:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:53.640 [2024-10-09 01:47:23.244116] tcp.c:1098:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4409 *** 00:09:53.640 INFO: Running with entropic power schedule (0xFF, 100). 
00:09:53.640 INFO: Seed: 3781264215 00:09:53.640 INFO: Loaded 1 modules (383850 inline 8-bit counters): 383850 [0x2be0a0c, 0x2c3e576), 00:09:53.640 INFO: Loaded 1 PC tables (383850 PCs): 383850 [0x2c3e578,0x3219c18), 00:09:53.640 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_9 00:09:53.640 INFO: A corpus is not provided, starting from an empty corpus 00:09:53.640 [2024-10-09 01:47:23.289515] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:53.640 [2024-10-09 01:47:23.289545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:53.897 #2 INITED cov: 12050 ft: 12040 corp: 1/1b exec/s: 0 rss: 72Mb 00:09:53.897 [2024-10-09 01:47:23.329677] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:53.897 [2024-10-09 01:47:23.329703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:53.897 [2024-10-09 01:47:23.329756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:53.897 [2024-10-09 01:47:23.329769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:53.897 #3 NEW cov: 12163 ft: 13247 corp: 2/3b lim: 5 exec/s: 0 rss: 72Mb L: 2/2 MS: 1 InsertByte- 00:09:53.897 [2024-10-09 01:47:23.389700] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:53.897 [2024-10-09 01:47:23.389727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:53.897 #4 NEW cov: 12169 ft: 13412 corp: 3/4b lim: 5 exec/s: 0 rss: 73Mb L: 1/2 MS: 1 ChangeByte- 00:09:53.897 [2024-10-09 01:47:23.429911] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:53.897 [2024-10-09 01:47:23.429937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:53.897 [2024-10-09 01:47:23.429991] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:53.897 [2024-10-09 01:47:23.430005] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:53.897 #5 NEW cov: 12254 ft: 13621 corp: 4/6b lim: 5 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 ChangeASCIIInt- 00:09:53.897 [2024-10-09 01:47:23.490103] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:53.897 [2024-10-09 01:47:23.490129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:53.897 [2024-10-09 01:47:23.490183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE 
MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:53.897 [2024-10-09 01:47:23.490197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:53.897 #6 NEW cov: 12254 ft: 13762 corp: 5/8b lim: 5 exec/s: 0 rss: 73Mb L: 2/2 MS: 1 CopyPart- 00:09:53.897 [2024-10-09 01:47:23.530505] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:53.897 [2024-10-09 01:47:23.530533] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:53.897 [2024-10-09 01:47:23.530587] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:53.897 [2024-10-09 01:47:23.530601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:53.897 [2024-10-09 01:47:23.530657] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:53.897 [2024-10-09 01:47:23.530672] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:53.897 [2024-10-09 01:47:23.530726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:53.897 [2024-10-09 01:47:23.530739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:54.156 #7 NEW cov: 12254 ft: 14299 corp: 6/12b lim: 5 exec/s: 0 rss: 73Mb L: 4/4 MS: 1 InsertRepeatedBytes- 00:09:54.156 [2024-10-09 01:47:23.590235] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000003 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.156 [2024-10-09 01:47:23.590263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:54.156 #8 NEW cov: 12254 ft: 14410 corp: 7/13b lim: 5 exec/s: 0 rss: 73Mb L: 1/4 MS: 1 EraseBytes- 00:09:54.156 [2024-10-09 01:47:23.650401] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.156 [2024-10-09 01:47:23.650427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:54.156 #9 NEW cov: 12254 ft: 14445 corp: 8/14b lim: 5 exec/s: 0 rss: 73Mb L: 1/4 MS: 1 EraseBytes- 00:09:54.156 [2024-10-09 01:47:23.690824] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.156 [2024-10-09 01:47:23.690852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:54.156 [2024-10-09 01:47:23.690907] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:09:54.156 [2024-10-09 01:47:23.690922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:54.156 [2024-10-09 01:47:23.690975] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.156 [2024-10-09 01:47:23.690993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:54.156 #10 NEW cov: 12254 ft: 14648 corp: 9/17b lim: 5 exec/s: 0 rss: 73Mb L: 3/4 MS: 1 CrossOver- 00:09:54.156 [2024-10-09 01:47:23.750818] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.156 [2024-10-09 01:47:23.750846] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:54.156 [2024-10-09 01:47:23.750900] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.156 [2024-10-09 01:47:23.750916] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:54.156 #11 NEW cov: 12254 ft: 14686 corp: 10/19b lim: 5 exec/s: 0 rss: 73Mb L: 2/4 MS: 1 ChangeByte- 00:09:54.156 [2024-10-09 01:47:23.790810] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.156 [2024-10-09 01:47:23.790842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:54.156 #12 NEW cov: 12254 ft: 14727 corp: 11/20b lim: 5 exec/s: 0 rss: 73Mb L: 1/4 MS: 1 ShuffleBytes- 00:09:54.415 [2024-10-09 01:47:23.831074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.415 [2024-10-09 01:47:23.831100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:54.415 [2024-10-09 01:47:23.831156] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.415 [2024-10-09 01:47:23.831170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:54.415 #13 NEW cov: 12254 ft: 14812 corp: 12/22b lim: 5 exec/s: 0 rss: 73Mb L: 2/4 MS: 1 CMP- DE: "\377\001"- 00:09:54.415 [2024-10-09 01:47:23.891207] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.415 [2024-10-09 01:47:23.891233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:54.415 [2024-10-09 01:47:23.891289] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.415 [2024-10-09 01:47:23.891303] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:54.415 #14 NEW cov: 12254 ft: 14854 corp: 13/24b lim: 5 exec/s: 0 rss: 73Mb L: 2/4 MS: 1 PersAutoDict- DE: "\377\001"- 00:09:54.415 [2024-10-09 01:47:23.951380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.415 [2024-10-09 01:47:23.951406] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:54.415 [2024-10-09 01:47:23.951461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.415 [2024-10-09 01:47:23.951475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:54.415 #15 NEW cov: 12254 ft: 14878 corp: 14/26b lim: 5 exec/s: 0 rss: 73Mb L: 2/4 MS: 1 ChangeBit- 00:09:54.415 [2024-10-09 01:47:23.991486] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.415 [2024-10-09 01:47:23.991516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:54.415 [2024-10-09 01:47:23.991571] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.415 [2024-10-09 01:47:23.991584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:54.415 #16 NEW cov: 12254 ft: 14898 corp: 15/28b lim: 5 exec/s: 0 rss: 73Mb L: 2/4 MS: 1 CrossOver- 00:09:54.415 [2024-10-09 01:47:24.031475] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.415 [2024-10-09 01:47:24.031503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:54.415 #17 NEW cov: 12254 ft: 14955 corp: 16/29b lim: 5 exec/s: 0 rss: 73Mb L: 1/4 MS: 1 ChangeByte- 00:09:54.415 [2024-10-09 01:47:24.071707] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.415 [2024-10-09 01:47:24.071733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:54.415 [2024-10-09 01:47:24.071788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000008 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.415 [2024-10-09 01:47:24.071803] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:54.672 #18 NEW cov: 12254 ft: 14978 corp: 17/31b lim: 5 exec/s: 0 rss: 73Mb L: 2/4 MS: 1 ShuffleBytes- 00:09:54.672 [2024-10-09 01:47:24.132051] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.672 [2024-10-09 01:47:24.132077] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:54.672 [2024-10-09 01:47:24.132133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.672 [2024-10-09 01:47:24.132148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:54.672 [2024-10-09 01:47:24.132203] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.672 [2024-10-09 01:47:24.132217] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:54.930 NEW_FUNC[1/1]: 0x1bf30c8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:09:54.930 #19 NEW cov: 12277 ft: 15029 corp: 18/34b lim: 5 exec/s: 19 rss: 74Mb L: 3/4 MS: 1 ChangeByte- 00:09:54.930 [2024-10-09 01:47:24.472756] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.930 [2024-10-09 01:47:24.472792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:54.930 #20 NEW cov: 12277 ft: 15059 corp: 19/35b lim: 5 exec/s: 20 rss: 74Mb L: 1/4 MS: 1 CrossOver- 00:09:54.930 [2024-10-09 01:47:24.533182] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.930 [2024-10-09 01:47:24.533210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:54.930 [2024-10-09 01:47:24.533270] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.930 [2024-10-09 01:47:24.533284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:54.930 [2024-10-09 01:47:24.533339] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.930 [2024-10-09 01:47:24.533354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:54.930 #21 NEW cov: 12277 ft: 15078 corp: 20/38b lim: 5 exec/s: 21 rss: 74Mb L: 3/4 MS: 1 CrossOver- 00:09:54.930 [2024-10-09 01:47:24.573302] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.930 [2024-10-09 01:47:24.573329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:54.930 [2024-10-09 01:47:24.573387] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.930 [2024-10-09 01:47:24.573402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 
cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:54.930 [2024-10-09 01:47:24.573459] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:54.930 [2024-10-09 01:47:24.573474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:55.188 #22 NEW cov: 12277 ft: 15091 corp: 21/41b lim: 5 exec/s: 22 rss: 74Mb L: 3/4 MS: 1 ShuffleBytes- 00:09:55.188 [2024-10-09 01:47:24.633124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.188 [2024-10-09 01:47:24.633150] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:55.188 #23 NEW cov: 12277 ft: 15123 corp: 22/42b lim: 5 exec/s: 23 rss: 74Mb L: 1/4 MS: 1 ShuffleBytes- 00:09:55.188 [2024-10-09 01:47:24.673255] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.188 [2024-10-09 01:47:24.673282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:55.188 #24 NEW cov: 12277 ft: 15158 corp: 23/43b lim: 5 exec/s: 24 rss: 75Mb L: 1/4 MS: 1 EraseBytes- 00:09:55.188 [2024-10-09 01:47:24.733847] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.188 [2024-10-09 01:47:24.733873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:55.188 [2024-10-09 01:47:24.733929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.188 [2024-10-09 01:47:24.733944] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:55.188 [2024-10-09 01:47:24.734004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.188 [2024-10-09 01:47:24.734018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:55.188 [2024-10-09 01:47:24.734074] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.188 [2024-10-09 01:47:24.734092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:55.188 #25 NEW cov: 12277 ft: 15214 corp: 24/47b lim: 5 exec/s: 25 rss: 75Mb L: 4/4 MS: 1 InsertByte- 00:09:55.188 [2024-10-09 01:47:24.773676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.188 [2024-10-09 01:47:24.773703] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:55.188 [2024-10-09 01:47:24.773760] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.189 [2024-10-09 01:47:24.773774] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:55.189 #26 NEW cov: 12277 ft: 15231 corp: 25/49b lim: 5 exec/s: 26 rss: 75Mb L: 2/4 MS: 1 ShuffleBytes- 00:09:55.189 [2024-10-09 01:47:24.813747] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.189 [2024-10-09 01:47:24.813773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:55.189 [2024-10-09 01:47:24.813850] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.189 [2024-10-09 01:47:24.813866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:55.189 #27 NEW cov: 12277 ft: 15274 corp: 26/51b lim: 5 exec/s: 27 rss: 75Mb L: 2/4 MS: 1 InsertByte- 00:09:55.189 [2024-10-09 01:47:24.853786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.189 [2024-10-09 01:47:24.853818] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:55.447 #28 NEW cov: 12277 ft: 15319 corp: 27/52b lim: 5 exec/s: 28 rss: 75Mb L: 1/4 MS: 1 CrossOver- 00:09:55.447 [2024-10-09 01:47:24.914233] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.447 [2024-10-09 01:47:24.914262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:55.447 [2024-10-09 01:47:24.914319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:0000000c cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.447 [2024-10-09 01:47:24.914333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:55.447 [2024-10-09 01:47:24.914389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.447 [2024-10-09 01:47:24.914403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:55.447 #29 NEW cov: 12277 ft: 15393 corp: 28/55b lim: 5 exec/s: 29 rss: 75Mb L: 3/4 MS: 1 ChangeByte- 00:09:55.447 [2024-10-09 01:47:24.954492] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.447 [2024-10-09 01:47:24.954520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:55.447 [2024-10-09 01:47:24.954575] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 
cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.447 [2024-10-09 01:47:24.954589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:55.447 [2024-10-09 01:47:24.954649] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.447 [2024-10-09 01:47:24.954664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:55.447 [2024-10-09 01:47:24.954718] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.447 [2024-10-09 01:47:24.954732] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:55.447 #30 NEW cov: 12277 ft: 15398 corp: 29/59b lim: 5 exec/s: 30 rss: 75Mb L: 4/4 MS: 1 PersAutoDict- DE: "\377\001"- 00:09:55.447 [2024-10-09 01:47:24.994419] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.447 [2024-10-09 01:47:24.994448] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:55.447 [2024-10-09 01:47:24.994508] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.447 [2024-10-09 01:47:24.994524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:55.447 #31 NEW cov: 12277 ft: 15446 corp: 30/61b lim: 5 exec/s: 31 rss: 75Mb L: 2/4 MS: 1 InsertByte- 00:09:55.447 [2024-10-09 01:47:25.034290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000005 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.447 [2024-10-09 01:47:25.034317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:55.447 #32 NEW cov: 12277 ft: 15499 corp: 31/62b lim: 5 exec/s: 32 rss: 75Mb L: 1/4 MS: 1 ChangeByte- 00:09:55.447 [2024-10-09 01:47:25.094986] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.447 [2024-10-09 01:47:25.095013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:55.447 [2024-10-09 01:47:25.095070] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.447 [2024-10-09 01:47:25.095084] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:55.447 [2024-10-09 01:47:25.095140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.448 [2024-10-09 01:47:25.095154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:55.448 [2024-10-09 01:47:25.095210] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000002 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.448 [2024-10-09 01:47:25.095223] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:55.706 #33 NEW cov: 12277 ft: 15524 corp: 32/66b lim: 5 exec/s: 33 rss: 75Mb L: 4/4 MS: 1 ChangeBinInt- 00:09:55.706 [2024-10-09 01:47:25.155104] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.706 [2024-10-09 01:47:25.155130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:55.706 [2024-10-09 01:47:25.155188] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.706 [2024-10-09 01:47:25.155205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:55.706 [2024-10-09 01:47:25.155261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:6 nsid:0 cdw10:0000000f cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.706 [2024-10-09 01:47:25.155276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:55.706 [2024-10-09 01:47:25.155330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.706 [2024-10-09 01:47:25.155344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:55.706 #34 NEW cov: 12277 ft: 15530 corp: 33/70b lim: 5 exec/s: 34 rss: 75Mb L: 4/4 MS: 1 InsertByte- 00:09:55.706 [2024-10-09 01:47:25.194734] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.706 [2024-10-09 01:47:25.194760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:55.706 #35 NEW cov: 12277 ft: 15539 corp: 34/71b lim: 5 exec/s: 35 rss: 75Mb L: 1/4 MS: 1 ShuffleBytes- 00:09:55.706 [2024-10-09 01:47:25.234994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000006 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.706 [2024-10-09 01:47:25.235020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:55.706 [2024-10-09 01:47:25.235075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.706 [2024-10-09 01:47:25.235090] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:55.706 #36 NEW cov: 12277 ft: 15551 corp: 35/73b lim: 5 exec/s: 36 rss: 75Mb L: 2/4 MS: 1 InsertByte- 00:09:55.706 [2024-10-09 01:47:25.274935] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: NAMESPACE MANAGEMENT (0d) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:55.706 [2024-10-09 01:47:25.274961] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:55.706 #37 NEW cov: 12277 ft: 15557 corp: 36/74b lim: 5 exec/s: 18 rss: 75Mb L: 1/4 MS: 1 ChangeBit- 00:09:55.706 #37 DONE cov: 12277 ft: 15557 corp: 36/74b lim: 5 exec/s: 18 rss: 75Mb 00:09:55.706 ###### Recommended dictionary. ###### 00:09:55.706 "\377\001" # Uses: 2 00:09:55.706 ###### End of recommended dictionary. ###### 00:09:55.706 Done 37 runs in 2 second(s) 00:09:55.965 01:47:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_9.conf /var/tmp/suppress_nvmf_fuzz 00:09:55.965 01:47:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:55.965 01:47:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:55.965 01:47:25 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 10 1 0x1 00:09:55.965 01:47:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=10 00:09:55.965 01:47:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:55.965 01:47:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:55.965 01:47:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:09:55.965 01:47:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_10.conf 00:09:55.965 01:47:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:55.965 01:47:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:55.965 01:47:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 10 00:09:55.965 01:47:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4410 00:09:55.965 01:47:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:09:55.965 01:47:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' 00:09:55.965 01:47:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4410"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:55.965 01:47:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:55.965 01:47:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:55.965 01:47:25 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4410' -c /tmp/fuzz_json_10.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 -Z 10 00:09:55.965 [2024-10-09 01:47:25.455563] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 
00:09:55.965 [2024-10-09 01:47:25.455647] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4043832 ] 00:09:56.224 [2024-10-09 01:47:25.652590] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.224 [2024-10-09 01:47:25.691263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.224 [2024-10-09 01:47:25.750205] tcp.c: 754:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:56.224 [2024-10-09 01:47:25.766413] tcp.c:1098:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4410 *** 00:09:56.224 INFO: Running with entropic power schedule (0xFF, 100). 00:09:56.224 INFO: Seed: 2009294782 00:09:56.224 INFO: Loaded 1 modules (383850 inline 8-bit counters): 383850 [0x2be0a0c, 0x2c3e576), 00:09:56.224 INFO: Loaded 1 PC tables (383850 PCs): 383850 [0x2c3e578,0x3219c18), 00:09:56.224 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_10 00:09:56.224 INFO: A corpus is not provided, starting from an empty corpus 00:09:56.224 #2 INITED exec/s: 0 rss: 66Mb 00:09:56.224 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:56.224 This may also happen if the target rejected all inputs we tried so far 00:09:56.224 [2024-10-09 01:47:25.831884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00480000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:56.224 [2024-10-09 01:47:25.831915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:56.790 NEW_FUNC[1/714]: 0x448a88 in fuzz_admin_security_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:205 00:09:56.790 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:56.790 #9 NEW cov: 12073 ft: 12073 corp: 2/11b lim: 40 exec/s: 0 rss: 73Mb L: 10/10 MS: 2 InsertByte-CMP- DE: "H\000\000\000\000\000\000\000"- 00:09:56.790 [2024-10-09 01:47:26.172833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00480000 cdw11:00001900 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:56.790 [2024-10-09 01:47:26.172879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:56.790 #10 NEW cov: 12186 ft: 12599 corp: 3/21b lim: 40 exec/s: 0 rss: 73Mb L: 10/10 MS: 1 ChangeByte- 00:09:56.790 [2024-10-09 01:47:26.232869] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00480000 cdw11:e1000019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:56.790 [2024-10-09 01:47:26.232901] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:56.790 #11 NEW cov: 12192 ft: 12798 corp: 4/32b lim: 40 exec/s: 0 rss: 74Mb L: 11/11 MS: 1 InsertByte- 00:09:56.790 [2024-10-09 01:47:26.293372] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:c5535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:56.790 [2024-10-09 01:47:26.293398] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:56.790 [2024-10-09 01:47:26.293473] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:56.790 [2024-10-09 01:47:26.293488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:56.790 [2024-10-09 01:47:26.293545] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:56.790 [2024-10-09 01:47:26.293559] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:56.790 [2024-10-09 01:47:26.293617] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:56.790 [2024-10-09 01:47:26.293630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:56.790 #14 NEW cov: 12277 ft: 13795 corp: 5/70b lim: 40 exec/s: 0 rss: 74Mb L: 38/38 MS: 3 ChangeBit-ChangeByte-InsertRepeatedBytes- 00:09:56.790 [2024-10-09 01:47:26.333127] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00480000 cdw11:00481900 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:56.790 [2024-10-09 01:47:26.333152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:56.790 #20 NEW cov: 12277 ft: 14016 corp: 6/80b lim: 40 exec/s: 0 rss: 74Mb L: 10/38 MS: 1 CopyPart- 00:09:56.790 [2024-10-09 01:47:26.373221] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00ffffff cdw11:ffff4800 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:56.790 [2024-10-09 01:47:26.373246] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:56.790 #21 NEW cov: 12277 ft: 14076 corp: 7/95b lim: 40 exec/s: 0 rss: 74Mb L: 15/38 MS: 1 InsertRepeatedBytes- 00:09:56.790 [2024-10-09 01:47:26.413719] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:f6c93434 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:56.790 [2024-10-09 01:47:26.413745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:56.790 [2024-10-09 01:47:26.413805] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:56.790 [2024-10-09 01:47:26.413825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:56.790 [2024-10-09 01:47:26.413884] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:56.790 [2024-10-09 01:47:26.413898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:56.790 [2024-10-09 01:47:26.413955] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:56.790 [2024-10-09 01:47:26.413968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:56.790 #24 NEW cov: 12277 ft: 14171 corp: 8/129b lim: 40 exec/s: 0 rss: 74Mb L: 34/38 MS: 3 InsertRepeatedBytes-ChangeBinInt-InsertRepeatedBytes- 00:09:56.790 [2024-10-09 01:47:26.453901] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:c5535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:56.790 [2024-10-09 01:47:26.453927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:56.790 [2024-10-09 01:47:26.453987] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:56.790 [2024-10-09 01:47:26.454002] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:56.790 [2024-10-09 01:47:26.454058] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:56.790 [2024-10-09 01:47:26.454072] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:56.790 [2024-10-09 01:47:26.454132] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:56.790 [2024-10-09 01:47:26.454145] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:57.049 #25 NEW cov: 12277 ft: 14184 corp: 9/167b lim: 40 exec/s: 0 rss: 74Mb L: 38/38 MS: 1 ShuffleBytes- 00:09:57.049 [2024-10-09 01:47:26.513584] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00480000 cdw11:66004819 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:57.049 [2024-10-09 01:47:26.513609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:57.049 #26 NEW cov: 12277 ft: 14230 corp: 10/178b lim: 40 exec/s: 0 rss: 74Mb L: 11/38 MS: 1 InsertByte- 00:09:57.049 [2024-10-09 01:47:26.573799] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00480000 cdw11:00001900 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:57.049 [2024-10-09 01:47:26.573830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:57.049 #27 NEW cov: 12277 ft: 14351 corp: 11/192b lim: 40 exec/s: 0 rss: 74Mb L: 14/38 MS: 1 CMP- DE: "\023\001\000\000"- 00:09:57.049 [2024-10-09 01:47:26.613929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00400000 cdw11:0a480000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:57.049 [2024-10-09 01:47:26.613954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:57.049 #32 NEW cov: 12277 ft: 14390 corp: 12/205b lim: 40 exec/s: 0 rss: 74Mb L: 13/38 MS: 5 
EraseBytes-CopyPart-EraseBytes-ChangeBit-PersAutoDict- DE: "H\000\000\000\000\000\000\000"- 00:09:57.049 [2024-10-09 01:47:26.654048] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:02480000 cdw11:00001900 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:57.049 [2024-10-09 01:47:26.654073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:57.049 #33 NEW cov: 12277 ft: 14394 corp: 13/215b lim: 40 exec/s: 0 rss: 74Mb L: 10/38 MS: 1 ChangeBinInt- 00:09:57.049 [2024-10-09 01:47:26.694122] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00480000 cdw11:e1000019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:57.049 [2024-10-09 01:47:26.694147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:57.308 NEW_FUNC[1/1]: 0x1bf30c8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:09:57.308 #34 NEW cov: 12300 ft: 14428 corp: 14/227b lim: 40 exec/s: 0 rss: 74Mb L: 12/38 MS: 1 InsertByte- 00:09:57.308 [2024-10-09 01:47:26.754676] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:c5535353 cdw11:53535300 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:57.308 [2024-10-09 01:47:26.754705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:57.308 [2024-10-09 01:47:26.754764] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:57.308 [2024-10-09 01:47:26.754779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:57.308 [2024-10-09 01:47:26.754841] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:57.308 [2024-10-09 01:47:26.754855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:57.308 [2024-10-09 01:47:26.754914] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:57.308 [2024-10-09 01:47:26.754928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:57.308 #35 NEW cov: 12300 ft: 14469 corp: 15/266b lim: 40 exec/s: 0 rss: 74Mb L: 39/39 MS: 1 CrossOver- 00:09:57.308 [2024-10-09 01:47:26.794399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00400000 cdw11:0a480000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:57.308 [2024-10-09 01:47:26.794424] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:57.308 #36 NEW cov: 12300 ft: 14501 corp: 16/279b lim: 40 exec/s: 36 rss: 74Mb L: 13/39 MS: 1 ShuffleBytes- 00:09:57.308 [2024-10-09 01:47:26.855093] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:c5535353 cdw11:53535300 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:57.308 [2024-10-09 01:47:26.855118] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:57.308 [2024-10-09 01:47:26.855178] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:57.308 [2024-10-09 01:47:26.855192] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:57.308 [2024-10-09 01:47:26.855249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:53485353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:57.308 [2024-10-09 01:47:26.855262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:57.308 [2024-10-09 01:47:26.855321] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:57.308 [2024-10-09 01:47:26.855335] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:57.308 [2024-10-09 01:47:26.855391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:8 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:57.308 [2024-10-09 01:47:26.855405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:09:57.308 #37 NEW cov: 12300 ft: 14580 corp: 17/319b lim: 40 exec/s: 37 rss: 74Mb L: 40/40 MS: 1 CrossOver- 00:09:57.308 [2024-10-09 01:47:26.914866] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00004800 cdw11:00004819 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:57.308 [2024-10-09 01:47:26.914891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:57.308 [2024-10-09 01:47:26.914954] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00004800 cdw11:00004819 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:57.308 [2024-10-09 01:47:26.914968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:57.308 #38 NEW cov: 12300 ft: 14795 corp: 18/338b lim: 40 exec/s: 38 rss: 74Mb L: 19/40 MS: 1 CopyPart- 00:09:57.308 [2024-10-09 01:47:26.955259] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:c5535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:57.308 [2024-10-09 01:47:26.955285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:57.308 [2024-10-09 01:47:26.955347] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:57.308 [2024-10-09 01:47:26.955361] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:57.308 [2024-10-09 01:47:26.955421] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:53535353 cdw11:80535353 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:09:57.308 [2024-10-09 01:47:26.955434] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:57.308 [2024-10-09 01:47:26.955493] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:57.308 [2024-10-09 01:47:26.955506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:57.567 #39 NEW cov: 12300 ft: 14805 corp: 19/377b lim: 40 exec/s: 39 rss: 74Mb L: 39/40 MS: 1 InsertByte- 00:09:57.567 [2024-10-09 01:47:26.995359] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:c5535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:57.567 [2024-10-09 01:47:26.995385] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:57.567 [2024-10-09 01:47:26.995462] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:53535353 cdw11:53530000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:57.567 [2024-10-09 01:47:26.995476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:57.567 [2024-10-09 01:47:26.995535] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00275353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:57.567 [2024-10-09 01:47:26.995549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:57.567 [2024-10-09 01:47:26.995606] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:57.567 [2024-10-09 01:47:26.995620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:57.567 #40 NEW cov: 12300 ft: 14837 corp: 20/416b lim: 40 exec/s: 40 rss: 74Mb L: 39/40 MS: 1 ChangeBinInt- 00:09:57.567 [2024-10-09 01:47:27.055170] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00400000 cdw11:0a480000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:57.567 [2024-10-09 01:47:27.055194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:57.567 #41 NEW cov: 12300 ft: 14872 corp: 21/429b lim: 40 exec/s: 41 rss: 74Mb L: 13/40 MS: 1 ChangeBit- 00:09:57.567 [2024-10-09 01:47:27.095228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00400000 cdw11:02480000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:57.567 [2024-10-09 01:47:27.095257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:57.567 #42 NEW cov: 12300 ft: 14876 corp: 22/442b lim: 40 exec/s: 42 rss: 74Mb L: 13/40 MS: 1 ChangeBit- 00:09:57.567 [2024-10-09 01:47:27.155569] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00480000 cdw11:00001900 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:57.567 [2024-10-09 01:47:27.155595] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:57.567 [2024-10-09 01:47:27.155654] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00130192 cdw11:92920000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:57.567 [2024-10-09 01:47:27.155669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:57.567 #43 NEW cov: 12300 ft: 14951 corp: 23/459b lim: 40 exec/s: 43 rss: 74Mb L: 17/40 MS: 1 InsertRepeatedBytes- 00:09:57.567 [2024-10-09 01:47:27.215715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00480000 cdw11:e1000019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:57.567 [2024-10-09 01:47:27.215740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:57.567 [2024-10-09 01:47:27.215823] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00040000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:57.567 [2024-10-09 01:47:27.215838] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:57.825 #44 NEW cov: 12300 ft: 14962 corp: 24/478b lim: 40 exec/s: 44 rss: 74Mb L: 19/40 MS: 1 CMP- DE: "\000\004\000\000\000\000\000\000"- 00:09:57.825 [2024-10-09 01:47:27.255833] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00480000 cdw11:08001900 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:57.825 [2024-10-09 01:47:27.255858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:57.825 [2024-10-09 01:47:27.255920] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00130192 cdw11:92920000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:57.825 [2024-10-09 01:47:27.255934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:57.825 #45 NEW cov: 12300 ft: 14988 corp: 25/495b lim: 40 exec/s: 45 rss: 74Mb L: 17/40 MS: 1 ChangeBinInt- 00:09:57.826 [2024-10-09 01:47:27.316278] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:c5535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:57.826 [2024-10-09 01:47:27.316303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:57.826 [2024-10-09 01:47:27.316364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:53535353 cdw11:53530000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:57.826 [2024-10-09 01:47:27.316377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:57.826 [2024-10-09 01:47:27.316436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00275353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:57.826 [2024-10-09 01:47:27.316450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:57.826 [2024-10-09 01:47:27.316509] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:53535353 cdw11:53535313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:57.826 [2024-10-09 01:47:27.316523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:57.826 #46 NEW cov: 12300 ft: 15004 corp: 26/534b lim: 40 exec/s: 46 rss: 75Mb L: 39/40 MS: 1 ChangeBit- 00:09:57.826 [2024-10-09 01:47:27.376445] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:c5535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:57.826 [2024-10-09 01:47:27.376470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:57.826 [2024-10-09 01:47:27.376532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:53535353 cdw11:53530000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:57.826 [2024-10-09 01:47:27.376546] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:57.826 [2024-10-09 01:47:27.376605] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00275300 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:57.826 [2024-10-09 01:47:27.376619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:57.826 [2024-10-09 01:47:27.376678] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:04000000 cdw11:00000013 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:57.826 [2024-10-09 01:47:27.376692] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:57.826 #47 NEW cov: 12300 ft: 15019 corp: 27/573b lim: 40 exec/s: 47 rss: 75Mb L: 39/40 MS: 1 PersAutoDict- DE: "\000\004\000\000\000\000\000\000"- 00:09:57.826 [2024-10-09 01:47:27.436349] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00480000 cdw11:ca080019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:57.826 [2024-10-09 01:47:27.436373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:57.826 [2024-10-09 01:47:27.436436] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00001301 cdw11:92929200 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:57.826 [2024-10-09 01:47:27.436450] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:57.826 #48 NEW cov: 12300 ft: 15070 corp: 28/591b lim: 40 exec/s: 48 rss: 75Mb L: 18/40 MS: 1 InsertByte- 00:09:58.084 [2024-10-09 01:47:27.496406] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00480000 cdw11:e1002d19 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.084 [2024-10-09 01:47:27.496432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:58.084 #49 NEW cov: 12300 ft: 15123 corp: 29/602b lim: 40 exec/s: 49 rss: 75Mb L: 11/40 MS: 1 ChangeByte- 00:09:58.084 [2024-10-09 01:47:27.536854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY 
RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:c5535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.084 [2024-10-09 01:47:27.536880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:58.085 [2024-10-09 01:47:27.536938] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:53535353 cdw11:53530000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.085 [2024-10-09 01:47:27.536951] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:58.085 [2024-10-09 01:47:27.537007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:00008000 cdw11:00275353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.085 [2024-10-09 01:47:27.537020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:58.085 [2024-10-09 01:47:27.537078] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:53535353 cdw11:53535313 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.085 [2024-10-09 01:47:27.537097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:58.085 #50 NEW cov: 12300 ft: 15184 corp: 30/641b lim: 40 exec/s: 50 rss: 75Mb L: 39/40 MS: 1 ChangeBit- 00:09:58.085 [2024-10-09 01:47:27.576573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000048 cdw11:1900000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.085 [2024-10-09 01:47:27.576599] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:58.085 #51 NEW cov: 12300 ft: 15217 corp: 31/649b lim: 40 exec/s: 51 rss: 75Mb L: 8/40 MS: 1 EraseBytes- 00:09:58.085 [2024-10-09 01:47:27.616722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00500000 cdw11:0a480000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.085 [2024-10-09 01:47:27.616748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:58.085 #52 NEW cov: 12300 ft: 15229 corp: 32/662b lim: 40 exec/s: 52 rss: 75Mb L: 13/40 MS: 1 ChangeBit- 00:09:58.085 [2024-10-09 01:47:27.656972] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00480000 cdw11:ca080019 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.085 [2024-10-09 01:47:27.656997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:58.085 [2024-10-09 01:47:27.657054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:00001301 cdw11:92190000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.085 [2024-10-09 01:47:27.657068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:58.085 #53 NEW cov: 12300 ft: 15246 corp: 33/680b lim: 40 exec/s: 53 rss: 75Mb L: 18/40 MS: 1 CrossOver- 00:09:58.085 [2024-10-09 01:47:27.717399] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:c5535353 cdw11:53535153 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:09:58.085 [2024-10-09 01:47:27.717425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:58.085 [2024-10-09 01:47:27.717480] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:5 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.085 [2024-10-09 01:47:27.717494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:58.085 [2024-10-09 01:47:27.717551] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:6 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.085 [2024-10-09 01:47:27.717565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:58.085 [2024-10-09 01:47:27.717622] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:7 nsid:0 cdw10:53535353 cdw11:53535353 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.085 [2024-10-09 01:47:27.717636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:58.085 #54 NEW cov: 12300 ft: 15278 corp: 34/718b lim: 40 exec/s: 54 rss: 75Mb L: 38/40 MS: 1 ChangeBit- 00:09:58.344 [2024-10-09 01:47:27.757099] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:00000010 cdw11:1900000a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.344 [2024-10-09 01:47:27.757126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:58.344 #55 NEW cov: 12300 ft: 15322 corp: 35/726b lim: 40 exec/s: 55 rss: 75Mb L: 8/40 MS: 1 CrossOver- 00:09:58.344 [2024-10-09 01:47:27.817291] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY RECEIVE (82) qid:0 cid:4 nsid:0 cdw10:02400000 cdw11:0a480000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:58.344 [2024-10-09 01:47:27.817320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:58.344 #56 NEW cov: 12300 ft: 15366 corp: 36/739b lim: 40 exec/s: 28 rss: 75Mb L: 13/40 MS: 1 ChangeBit- 00:09:58.344 #56 DONE cov: 12300 ft: 15366 corp: 36/739b lim: 40 exec/s: 28 rss: 75Mb 00:09:58.344 ###### Recommended dictionary. ###### 00:09:58.344 "H\000\000\000\000\000\000\000" # Uses: 1 00:09:58.344 "\023\001\000\000" # Uses: 0 00:09:58.344 "\000\004\000\000\000\000\000\000" # Uses: 1 00:09:58.344 ###### End of recommended dictionary. 
###### 00:09:58.344 Done 56 runs in 2 second(s) 00:09:58.344 01:47:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_10.conf /var/tmp/suppress_nvmf_fuzz 00:09:58.344 01:47:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:09:58.344 01:47:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:09:58.344 01:47:27 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 11 1 0x1 00:09:58.344 01:47:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=11 00:09:58.344 01:47:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:09:58.344 01:47:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:09:58.344 01:47:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:09:58.344 01:47:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_11.conf 00:09:58.344 01:47:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:09:58.344 01:47:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:09:58.344 01:47:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 11 00:09:58.344 01:47:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4411 00:09:58.344 01:47:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:09:58.344 01:47:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' 00:09:58.344 01:47:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4411"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:09:58.344 01:47:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:09:58.344 01:47:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:09:58.344 01:47:27 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4411' -c /tmp/fuzz_json_11.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 -Z 11 00:09:58.344 [2024-10-09 01:47:27.997851] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 
00:09:58.344 [2024-10-09 01:47:27.997935] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4044185 ] 00:09:58.604 [2024-10-09 01:47:28.196163] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.604 [2024-10-09 01:47:28.235399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.866 [2024-10-09 01:47:28.294480] tcp.c: 754:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:58.866 [2024-10-09 01:47:28.310696] tcp.c:1098:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4411 *** 00:09:58.866 INFO: Running with entropic power schedule (0xFF, 100). 00:09:58.866 INFO: Seed: 256322244 00:09:58.866 INFO: Loaded 1 modules (383850 inline 8-bit counters): 383850 [0x2be0a0c, 0x2c3e576), 00:09:58.866 INFO: Loaded 1 PC tables (383850 PCs): 383850 [0x2c3e578,0x3219c18), 00:09:58.866 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_11 00:09:58.866 INFO: A corpus is not provided, starting from an empty corpus 00:09:58.866 #2 INITED exec/s: 0 rss: 67Mb 00:09:58.866 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:09:58.866 This may also happen if the target rejected all inputs we tried so far 00:09:58.866 [2024-10-09 01:47:28.358446] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0af9f9f9 cdw11:f9f9f9f9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:58.866 [2024-10-09 01:47:28.358474] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:59.126 NEW_FUNC[1/715]: 0x44a7f8 in fuzz_admin_security_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:223 00:09:59.126 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:09:59.126 #4 NEW cov: 12080 ft: 12083 corp: 2/9b lim: 40 exec/s: 0 rss: 74Mb L: 8/8 MS: 2 CopyPart-InsertRepeatedBytes- 00:09:59.126 [2024-10-09 01:47:28.679248] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0af9f9f9 cdw11:f9f9fa02 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:59.126 [2024-10-09 01:47:28.679290] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:59.126 #25 NEW cov: 12198 ft: 12605 corp: 3/17b lim: 40 exec/s: 0 rss: 74Mb L: 8/8 MS: 1 ChangeBinInt- 00:09:59.126 [2024-10-09 01:47:28.739760] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:59.126 [2024-10-09 01:47:28.739788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:59.126 [2024-10-09 01:47:28.739852] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:59.126 [2024-10-09 01:47:28.739867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:59.126 [2024-10-09 01:47:28.739924] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:59.126 [2024-10-09 01:47:28.739937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:59.126 [2024-10-09 01:47:28.739994] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:59.126 [2024-10-09 01:47:28.740007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:59.126 #29 NEW cov: 12204 ft: 13676 corp: 4/53b lim: 40 exec/s: 0 rss: 74Mb L: 36/36 MS: 4 ShuffleBytes-ChangeBit-ChangeByte-InsertRepeatedBytes- 00:09:59.126 [2024-10-09 01:47:28.779854] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:59.126 [2024-10-09 01:47:28.779879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:59.126 [2024-10-09 01:47:28.779935] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:59.126 [2024-10-09 01:47:28.779949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:59.126 [2024-10-09 01:47:28.780005] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:59.126 [2024-10-09 01:47:28.780018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:59.126 [2024-10-09 01:47:28.780075] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffff3d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:59.126 [2024-10-09 01:47:28.780091] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:59.384 #34 NEW cov: 12289 ft: 13917 corp: 5/88b lim: 40 exec/s: 0 rss: 74Mb L: 35/36 MS: 5 ChangeByte-CopyPart-InsertByte-ChangeBit-CrossOver- 00:09:59.384 [2024-10-09 01:47:28.819990] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:59.385 [2024-10-09 01:47:28.820016] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:59.385 [2024-10-09 01:47:28.820072] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:59.385 [2024-10-09 01:47:28.820085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:59.385 [2024-10-09 01:47:28.820140] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:59.385 [2024-10-09 01:47:28.820153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:59.385 [2024-10-09 01:47:28.820208] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:e6e6e6ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:59.385 [2024-10-09 01:47:28.820221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:59.385 #35 NEW cov: 12289 ft: 14006 corp: 6/127b lim: 40 exec/s: 0 rss: 74Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:09:59.385 [2024-10-09 01:47:28.879661] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:3a0a9ff9 cdw11:e93a0a9f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:59.385 [2024-10-09 01:47:28.879687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:59.385 #40 NEW cov: 12289 ft: 14183 corp: 7/138b lim: 40 exec/s: 0 rss: 74Mb L: 11/39 MS: 5 EraseBytes-ChangeByte-ChangeBit-InsertByte-CopyPart- 00:09:59.385 [2024-10-09 01:47:28.919770] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0af9f9f9 cdw11:f9f9f90c SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:59.385 [2024-10-09 01:47:28.919796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:59.385 #41 NEW cov: 12289 ft: 14295 corp: 8/146b lim: 40 exec/s: 0 rss: 74Mb L: 8/39 MS: 1 ChangeBinInt- 00:09:59.385 [2024-10-09 01:47:28.959905] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0af9f9f8 cdw11:f9f9fa02 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:59.385 [2024-10-09 01:47:28.959929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:59.385 #42 NEW cov: 12289 ft: 14351 corp: 9/154b lim: 40 exec/s: 0 rss: 74Mb L: 8/39 MS: 1 ChangeBit- 00:09:59.385 [2024-10-09 01:47:29.020532] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:59.385 [2024-10-09 01:47:29.020557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:59.385 [2024-10-09 01:47:29.020613] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:59.385 [2024-10-09 01:47:29.020627] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:59.385 [2024-10-09 01:47:29.020682] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:59.385 [2024-10-09 01:47:29.020698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:59.385 [2024-10-09 01:47:29.020752] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:e6e6e627 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:59.385 [2024-10-09 01:47:29.020765] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:59.643 #43 NEW cov: 12289 ft: 14396 corp: 
10/193b lim: 40 exec/s: 0 rss: 74Mb L: 39/39 MS: 1 ChangeBinInt- 00:09:59.643 [2024-10-09 01:47:29.080365] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0af9f93a cdw11:0a9ff9f9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:59.643 [2024-10-09 01:47:29.080392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:59.643 [2024-10-09 01:47:29.080450] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:f9f90cf9 cdw11:e93a0a9f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:59.643 [2024-10-09 01:47:29.080464] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:59.643 #44 NEW cov: 12289 ft: 14646 corp: 11/212b lim: 40 exec/s: 0 rss: 74Mb L: 19/39 MS: 1 CrossOver- 00:09:59.643 [2024-10-09 01:47:29.140391] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0af9f9f9 cdw11:f9f9f9f9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:59.643 [2024-10-09 01:47:29.140415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:59.643 #45 NEW cov: 12289 ft: 14664 corp: 12/220b lim: 40 exec/s: 0 rss: 74Mb L: 8/39 MS: 1 ShuffleBytes- 00:09:59.643 [2024-10-09 01:47:29.180955] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:59.644 [2024-10-09 01:47:29.180982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:59.644 [2024-10-09 01:47:29.181040] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff00ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:59.644 [2024-10-09 01:47:29.181054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:59.644 [2024-10-09 01:47:29.181112] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:59.644 [2024-10-09 01:47:29.181126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:59.644 [2024-10-09 01:47:29.181183] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:e6e6e627 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:59.644 [2024-10-09 01:47:29.181197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:59.644 #46 NEW cov: 12289 ft: 14713 corp: 13/259b lim: 40 exec/s: 0 rss: 74Mb L: 39/39 MS: 1 ChangeByte- 00:09:59.644 [2024-10-09 01:47:29.240981] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0af9f9f8 cdw11:f9f9fada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:59.644 [2024-10-09 01:47:29.241006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:59.644 [2024-10-09 01:47:29.241062] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:09:59.644 [2024-10-09 01:47:29.241076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:59.644 [2024-10-09 01:47:29.241134] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:59.644 [2024-10-09 01:47:29.241147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:59.644 NEW_FUNC[1/1]: 0x1bf30c8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:09:59.644 #47 NEW cov: 12312 ft: 14934 corp: 14/284b lim: 40 exec/s: 0 rss: 75Mb L: 25/39 MS: 1 InsertRepeatedBytes- 00:09:59.644 [2024-10-09 01:47:29.301249] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:fffffffb SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:59.644 [2024-10-09 01:47:29.301273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:59.644 [2024-10-09 01:47:29.301331] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff00ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:59.644 [2024-10-09 01:47:29.301344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:59.644 [2024-10-09 01:47:29.301398] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:59.644 [2024-10-09 01:47:29.301412] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:59.644 [2024-10-09 01:47:29.301466] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:e6e6e627 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:59.644 [2024-10-09 01:47:29.301478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:09:59.906 #48 NEW cov: 12312 ft: 14967 corp: 15/323b lim: 40 exec/s: 48 rss: 75Mb L: 39/39 MS: 1 ChangeBit- 00:09:59.906 [2024-10-09 01:47:29.361286] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0af9f9f9 cdw11:f9f9faff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:59.906 [2024-10-09 01:47:29.361311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:59.906 [2024-10-09 01:47:29.361369] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:59.906 [2024-10-09 01:47:29.361383] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:59.906 [2024-10-09 01:47:29.361438] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:59.906 [2024-10-09 01:47:29.361451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:59.906 #49 NEW cov: 12312 ft: 15008 
corp: 16/349b lim: 40 exec/s: 49 rss: 75Mb L: 26/39 MS: 1 InsertRepeatedBytes- 00:09:59.906 [2024-10-09 01:47:29.401067] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0af9f9f9 cdw11:f9f9ffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:59.906 [2024-10-09 01:47:29.401092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:59.906 #50 NEW cov: 12312 ft: 15063 corp: 17/361b lim: 40 exec/s: 50 rss: 75Mb L: 12/39 MS: 1 CMP- DE: "\377\377\377\377"- 00:09:59.906 [2024-10-09 01:47:29.461231] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0af9f9f9 cdw11:0101f9f9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:59.906 [2024-10-09 01:47:29.461255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:59.906 #51 NEW cov: 12312 ft: 15079 corp: 18/369b lim: 40 exec/s: 51 rss: 75Mb L: 8/39 MS: 1 CMP- DE: "\001\001"- 00:09:59.906 [2024-10-09 01:47:29.501379] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:3a0a9ff9 cdw11:e93a0a9f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:59.906 [2024-10-09 01:47:29.501403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:59.906 #52 NEW cov: 12312 ft: 15099 corp: 19/383b lim: 40 exec/s: 52 rss: 75Mb L: 14/39 MS: 1 CrossOver- 00:09:59.906 [2024-10-09 01:47:29.562038] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:59.906 [2024-10-09 01:47:29.562063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:09:59.906 [2024-10-09 01:47:29.562119] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:59.906 [2024-10-09 01:47:29.562133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:09:59.906 [2024-10-09 01:47:29.562190] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:59.906 [2024-10-09 01:47:29.562204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:09:59.906 [2024-10-09 01:47:29.562261] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:09:59.906 [2024-10-09 01:47:29.562274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:00.165 #53 NEW cov: 12312 ft: 15118 corp: 20/419b lim: 40 exec/s: 53 rss: 75Mb L: 36/39 MS: 1 ChangeBinInt- 00:10:00.166 [2024-10-09 01:47:29.601786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:3a0a9ff9 cdw11:e93a0a9f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:00.166 [2024-10-09 01:47:29.601811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:00.166 [2024-10-09 
01:47:29.601878] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:f9e9f900 cdw11:0000fa0a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:00.166 [2024-10-09 01:47:29.601892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:00.166 #54 NEW cov: 12312 ft: 15137 corp: 21/437b lim: 40 exec/s: 54 rss: 75Mb L: 18/39 MS: 1 CMP- DE: "\000\000\000\372"- 00:10:00.166 [2024-10-09 01:47:29.661844] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:f9f9f90a cdw11:f9f9f9f9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:00.166 [2024-10-09 01:47:29.661867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:00.166 #55 NEW cov: 12312 ft: 15145 corp: 22/445b lim: 40 exec/s: 55 rss: 75Mb L: 8/39 MS: 1 ShuffleBytes- 00:10:00.166 [2024-10-09 01:47:29.702241] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0af9f9f8 cdw11:f9f9fada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:00.166 [2024-10-09 01:47:29.702265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:00.166 [2024-10-09 01:47:29.702323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:dadada01 cdw11:01dadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:00.166 [2024-10-09 01:47:29.702337] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:00.166 [2024-10-09 01:47:29.702394] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:00.166 [2024-10-09 01:47:29.702410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:00.166 #56 NEW cov: 12312 ft: 15158 corp: 23/470b lim: 40 exec/s: 56 rss: 75Mb L: 25/39 MS: 1 PersAutoDict- DE: "\001\001"- 00:10:00.166 [2024-10-09 01:47:29.762407] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a2bf9f8 cdw11:f9f9fada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:00.166 [2024-10-09 01:47:29.762432] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:00.166 [2024-10-09 01:47:29.762491] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:00.166 [2024-10-09 01:47:29.762504] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:00.166 [2024-10-09 01:47:29.762559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:00.166 [2024-10-09 01:47:29.762573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:00.166 #57 NEW cov: 12312 ft: 15207 corp: 24/495b lim: 40 exec/s: 57 rss: 75Mb L: 25/39 MS: 1 ChangeByte- 00:10:00.166 [2024-10-09 01:47:29.802198] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 
cid:4 nsid:0 cdw10:3a0a9ff9 cdw11:e9fa0af9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:00.166 [2024-10-09 01:47:29.802222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:00.425 #58 NEW cov: 12312 ft: 15213 corp: 25/504b lim: 40 exec/s: 58 rss: 75Mb L: 9/39 MS: 1 EraseBytes- 00:10:00.425 [2024-10-09 01:47:29.862703] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0a2bf9f8 cdw11:f9f9fada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:00.425 [2024-10-09 01:47:29.862727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:00.425 [2024-10-09 01:47:29.862785] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:00.425 [2024-10-09 01:47:29.862799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:00.425 [2024-10-09 01:47:29.862862] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:dadadada cdw11:dadadada SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:00.425 [2024-10-09 01:47:29.862875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:00.425 #59 NEW cov: 12312 ft: 15269 corp: 26/533b lim: 40 exec/s: 59 rss: 75Mb L: 29/39 MS: 1 InsertRepeatedBytes- 00:10:00.425 [2024-10-09 01:47:29.922573] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:3a0a0000 cdw11:00fa0a9f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:00.425 [2024-10-09 01:47:29.922598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:00.425 #60 NEW cov: 12312 ft: 15270 corp: 27/547b lim: 40 exec/s: 60 rss: 75Mb L: 14/39 MS: 1 PersAutoDict- DE: "\000\000\000\372"- 00:10:00.425 [2024-10-09 01:47:29.962648] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:3a0a9ff9 cdw11:54fa0af9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:00.425 [2024-10-09 01:47:29.962674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:00.425 #61 NEW cov: 12312 ft: 15302 corp: 28/556b lim: 40 exec/s: 61 rss: 75Mb L: 9/39 MS: 1 ChangeByte- 00:10:00.425 [2024-10-09 01:47:30.023186] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0affffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:00.425 [2024-10-09 01:47:30.023229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:00.425 #62 NEW cov: 12312 ft: 15396 corp: 29/566b lim: 40 exec/s: 62 rss: 75Mb L: 10/39 MS: 1 CrossOver- 00:10:00.425 [2024-10-09 01:47:30.063323] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:0af9f9f9 cdw11:f9f97afa SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:00.425 [2024-10-09 01:47:30.063354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:00.425 [2024-10-09 01:47:30.063415] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) 
qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:00.425 [2024-10-09 01:47:30.063429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:00.425 [2024-10-09 01:47:30.063489] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:00.425 [2024-10-09 01:47:30.063502] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:00.685 #63 NEW cov: 12312 ft: 15432 corp: 30/593b lim: 40 exec/s: 63 rss: 75Mb L: 27/39 MS: 1 InsertByte- 00:10:00.685 [2024-10-09 01:47:30.123216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:3a000000 cdw11:00000af9 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:00.685 [2024-10-09 01:47:30.123245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:00.685 #65 NEW cov: 12312 ft: 15444 corp: 31/602b lim: 40 exec/s: 65 rss: 75Mb L: 9/39 MS: 2 CrossOver-InsertRepeatedBytes- 00:10:00.685 [2024-10-09 01:47:30.184012] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:00.685 [2024-10-09 01:47:30.184040] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:00.685 [2024-10-09 01:47:30.184100] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:00.685 [2024-10-09 01:47:30.184115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:00.685 [2024-10-09 01:47:30.184174] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:00.685 [2024-10-09 01:47:30.184188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:00.685 [2024-10-09 01:47:30.184247] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:e6e6e627 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:00.685 [2024-10-09 01:47:30.184261] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:00.685 [2024-10-09 01:47:30.184319] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:8 nsid:0 cdw10:00002b00 cdw11:ffffff3d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:00.685 [2024-10-09 01:47:30.184333] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:10:00.685 #66 NEW cov: 12312 ft: 15501 corp: 32/642b lim: 40 exec/s: 66 rss: 75Mb L: 40/40 MS: 1 InsertByte- 00:10:00.685 [2024-10-09 01:47:30.223909] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:00.685 [2024-10-09 01:47:30.223935] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 
sqhd:000f p:0 m:0 dnr:0 00:10:00.685 [2024-10-09 01:47:30.224009] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:00.685 [2024-10-09 01:47:30.224027] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:00.685 [2024-10-09 01:47:30.224080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:00.685 [2024-10-09 01:47:30.224094] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:00.685 [2024-10-09 01:47:30.224150] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:ffffff3d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:00.685 [2024-10-09 01:47:30.224163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:00.685 #72 NEW cov: 12312 ft: 15524 corp: 33/677b lim: 40 exec/s: 72 rss: 76Mb L: 35/40 MS: 1 ChangeASCIIInt- 00:10:00.685 [2024-10-09 01:47:30.284107] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:ffff3aff cdw11:fffffffb SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:00.685 [2024-10-09 01:47:30.284134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:00.685 [2024-10-09 01:47:30.284219] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffff00ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:00.685 [2024-10-09 01:47:30.284233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:00.685 [2024-10-09 01:47:30.284290] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:00.685 [2024-10-09 01:47:30.284303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:00.685 [2024-10-09 01:47:30.284361] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:e6e6e627 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:00.685 [2024-10-09 01:47:30.284374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:00.685 #73 NEW cov: 12312 ft: 15531 corp: 34/716b lim: 40 exec/s: 73 rss: 76Mb L: 39/40 MS: 1 ChangeByte- 00:10:00.685 [2024-10-09 01:47:30.343828] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: SECURITY SEND (81) qid:0 cid:4 nsid:0 cdw10:3a0a0000 cdw11:00fa0a9f SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:00.685 [2024-10-09 01:47:30.343870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:00.945 #74 NEW cov: 12312 ft: 15539 corp: 35/731b lim: 40 exec/s: 37 rss: 76Mb L: 15/40 MS: 1 InsertByte- 00:10:00.945 #74 DONE cov: 12312 ft: 15539 corp: 35/731b lim: 40 exec/s: 37 rss: 76Mb 00:10:00.945 ###### Recommended dictionary. 
###### 00:10:00.945 "\377\377\377\377" # Uses: 0 00:10:00.945 "\001\001" # Uses: 1 00:10:00.945 "\000\000\000\372" # Uses: 1 00:10:00.945 ###### End of recommended dictionary. ###### 00:10:00.945 Done 74 runs in 2 second(s) 00:10:00.946 01:47:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_11.conf /var/tmp/suppress_nvmf_fuzz 00:10:00.946 01:47:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:00.946 01:47:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:00.946 01:47:30 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 12 1 0x1 00:10:00.946 01:47:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=12 00:10:00.946 01:47:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:10:00.946 01:47:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:10:00.946 01:47:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:10:00.946 01:47:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_12.conf 00:10:00.946 01:47:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:10:00.946 01:47:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:10:00.946 01:47:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 12 00:10:00.946 01:47:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4412 00:10:00.946 01:47:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:10:00.946 01:47:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' 00:10:00.946 01:47:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4412"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:10:00.946 01:47:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:10:00.946 01:47:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:10:00.946 01:47:30 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4412' -c /tmp/fuzz_json_12.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 -Z 12 00:10:00.946 [2024-10-09 01:47:30.550557] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 
00:10:00.946 [2024-10-09 01:47:30.550623] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4044522 ] 00:10:01.204 [2024-10-09 01:47:30.747427] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.204 [2024-10-09 01:47:30.787757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.204 [2024-10-09 01:47:30.846837] tcp.c: 754:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:01.204 [2024-10-09 01:47:30.863038] tcp.c:1098:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4412 *** 00:10:01.463 INFO: Running with entropic power schedule (0xFF, 100). 00:10:01.463 INFO: Seed: 2811344425 00:10:01.463 INFO: Loaded 1 modules (383850 inline 8-bit counters): 383850 [0x2be0a0c, 0x2c3e576), 00:10:01.463 INFO: Loaded 1 PC tables (383850 PCs): 383850 [0x2c3e578,0x3219c18), 00:10:01.463 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_12 00:10:01.463 INFO: A corpus is not provided, starting from an empty corpus 00:10:01.463 #2 INITED exec/s: 0 rss: 66Mb 00:10:01.463 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:10:01.463 This may also happen if the target rejected all inputs we tried so far 00:10:01.463 [2024-10-09 01:47:30.919096] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:a3a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:01.463 [2024-10-09 01:47:30.919124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:01.463 [2024-10-09 01:47:30.919185] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:a3a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:01.463 [2024-10-09 01:47:30.919199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:01.463 [2024-10-09 01:47:30.919254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:a3a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:01.463 [2024-10-09 01:47:30.919268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:01.463 [2024-10-09 01:47:30.919325] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:a3a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:01.463 [2024-10-09 01:47:30.919341] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:01.721 NEW_FUNC[1/715]: 0x44c568 in fuzz_admin_directive_send_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:241 00:10:01.721 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:10:01.721 #5 NEW cov: 12083 ft: 12081 corp: 2/36b lim: 40 exec/s: 0 rss: 73Mb L: 35/35 MS: 3 ShuffleBytes-ChangeBit-InsertRepeatedBytes- 00:10:01.721 [2024-10-09 01:47:31.239353] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 
cid:4 nsid:0 cdw10:0aa3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:01.721 [2024-10-09 01:47:31.239395] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:01.721 #6 NEW cov: 12196 ft: 13602 corp: 3/51b lim: 40 exec/s: 0 rss: 73Mb L: 15/35 MS: 1 CrossOver- 00:10:01.721 [2024-10-09 01:47:31.289832] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:a3a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:01.721 [2024-10-09 01:47:31.289859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:01.721 [2024-10-09 01:47:31.289913] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:a3a5a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:01.721 [2024-10-09 01:47:31.289927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:01.721 [2024-10-09 01:47:31.289980] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:a3a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:01.722 [2024-10-09 01:47:31.289993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:01.722 [2024-10-09 01:47:31.290046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:a3a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:01.722 [2024-10-09 01:47:31.290059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:01.722 #7 NEW cov: 12202 ft: 13778 corp: 4/86b lim: 40 exec/s: 0 rss: 73Mb L: 35/35 MS: 1 ChangeBinInt- 00:10:01.722 [2024-10-09 01:47:31.349511] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aa3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:01.722 [2024-10-09 01:47:31.349536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:01.980 #8 NEW cov: 12287 ft: 14067 corp: 5/101b lim: 40 exec/s: 0 rss: 73Mb L: 15/35 MS: 1 ChangeBit- 00:10:01.980 [2024-10-09 01:47:31.409720] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aa3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:01.980 [2024-10-09 01:47:31.409745] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:01.981 #9 NEW cov: 12287 ft: 14147 corp: 6/116b lim: 40 exec/s: 0 rss: 73Mb L: 15/35 MS: 1 ShuffleBytes- 00:10:01.981 [2024-10-09 01:47:31.449794] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aa3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:01.981 [2024-10-09 01:47:31.449826] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:01.981 #10 NEW cov: 12287 ft: 14223 corp: 7/128b lim: 40 exec/s: 0 rss: 74Mb L: 12/35 MS: 1 CrossOver- 00:10:01.981 [2024-10-09 01:47:31.510134] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aa3a3a3 
cdw11:a3a30aa3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:01.981 [2024-10-09 01:47:31.510161] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:01.981 [2024-10-09 01:47:31.510216] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:a3a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:01.981 [2024-10-09 01:47:31.510229] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:01.981 #11 NEW cov: 12287 ft: 14522 corp: 8/148b lim: 40 exec/s: 0 rss: 74Mb L: 20/35 MS: 1 CrossOver- 00:10:01.981 [2024-10-09 01:47:31.550092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aa3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:01.981 [2024-10-09 01:47:31.550117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:01.981 #12 NEW cov: 12287 ft: 14623 corp: 9/163b lim: 40 exec/s: 0 rss: 74Mb L: 15/35 MS: 1 ShuffleBytes- 00:10:01.981 [2024-10-09 01:47:31.590176] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aa3a300 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:01.981 [2024-10-09 01:47:31.590202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:01.981 #13 NEW cov: 12287 ft: 14734 corp: 10/178b lim: 40 exec/s: 0 rss: 74Mb L: 15/35 MS: 1 ChangeByte- 00:10:02.239 [2024-10-09 01:47:31.650552] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:02.239 [2024-10-09 01:47:31.650581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:02.239 [2024-10-09 01:47:31.650637] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:02.239 [2024-10-09 01:47:31.650651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:02.239 #16 NEW cov: 12287 ft: 14784 corp: 11/201b lim: 40 exec/s: 0 rss: 74Mb L: 23/35 MS: 3 ChangeBit-InsertByte-InsertRepeatedBytes- 00:10:02.239 [2024-10-09 01:47:31.690626] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aa3a3a3 cdw11:a3a30aa3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:02.239 [2024-10-09 01:47:31.690651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:02.239 [2024-10-09 01:47:31.690722] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:a3a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:02.239 [2024-10-09 01:47:31.690736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:02.239 #17 NEW cov: 12287 ft: 14808 corp: 12/221b lim: 40 exec/s: 0 rss: 74Mb L: 20/35 MS: 1 ChangeByte- 00:10:02.239 [2024-10-09 01:47:31.750675] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 
cdw10:0aa3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:02.239 [2024-10-09 01:47:31.750700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:02.239 #18 NEW cov: 12287 ft: 14816 corp: 13/236b lim: 40 exec/s: 0 rss: 74Mb L: 15/35 MS: 1 ChangeByte- 00:10:02.239 [2024-10-09 01:47:31.790775] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aa3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:02.239 [2024-10-09 01:47:31.790800] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:02.239 NEW_FUNC[1/1]: 0x1bf30c8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:10:02.239 #19 NEW cov: 12310 ft: 14853 corp: 14/248b lim: 40 exec/s: 0 rss: 74Mb L: 12/35 MS: 1 CrossOver- 00:10:02.239 [2024-10-09 01:47:31.850942] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aa3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:02.239 [2024-10-09 01:47:31.850968] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:02.239 #20 NEW cov: 12310 ft: 14872 corp: 15/261b lim: 40 exec/s: 0 rss: 74Mb L: 13/35 MS: 1 CrossOver- 00:10:02.498 [2024-10-09 01:47:31.911154] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:23a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:02.498 [2024-10-09 01:47:31.911180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:02.498 #21 NEW cov: 12310 ft: 14886 corp: 16/276b lim: 40 exec/s: 21 rss: 74Mb L: 15/35 MS: 1 ChangeByte- 00:10:02.498 [2024-10-09 01:47:31.951635] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:a3a2a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:02.498 [2024-10-09 01:47:31.951661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:02.498 [2024-10-09 01:47:31.951715] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:a3a5a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:02.498 [2024-10-09 01:47:31.951728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:02.498 [2024-10-09 01:47:31.951781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:a3a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:02.498 [2024-10-09 01:47:31.951794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:02.498 [2024-10-09 01:47:31.951851] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:a3a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:02.498 [2024-10-09 01:47:31.951865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:02.498 #22 NEW cov: 12310 ft: 14896 corp: 17/311b lim: 40 exec/s: 22 rss: 74Mb L: 35/35 MS: 1 ChangeBinInt- 00:10:02.498 [2024-10-09 
01:47:32.011830] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:a3a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:02.498 [2024-10-09 01:47:32.011855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:02.498 [2024-10-09 01:47:32.011912] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:a3a5a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:02.498 [2024-10-09 01:47:32.011925] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:02.498 [2024-10-09 01:47:32.011979] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:a3a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:02.498 [2024-10-09 01:47:32.011992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:02.498 [2024-10-09 01:47:32.012046] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:a3a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:02.498 [2024-10-09 01:47:32.012059] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:02.498 #23 NEW cov: 12310 ft: 14923 corp: 18/346b lim: 40 exec/s: 23 rss: 74Mb L: 35/35 MS: 1 ChangeByte- 00:10:02.498 [2024-10-09 01:47:32.051461] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:02.498 [2024-10-09 01:47:32.051489] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:02.498 #24 NEW cov: 12310 ft: 14947 corp: 19/358b lim: 40 exec/s: 24 rss: 74Mb L: 12/35 MS: 1 EraseBytes- 00:10:02.498 [2024-10-09 01:47:32.111655] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aa35ba3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:02.498 [2024-10-09 01:47:32.111680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:02.498 #25 NEW cov: 12310 ft: 14966 corp: 20/373b lim: 40 exec/s: 25 rss: 74Mb L: 15/35 MS: 1 ChangeBinInt- 00:10:02.757 [2024-10-09 01:47:32.171845] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:0000000e SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:02.757 [2024-10-09 01:47:32.171870] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:02.757 #26 NEW cov: 12310 ft: 14975 corp: 21/382b lim: 40 exec/s: 26 rss: 74Mb L: 9/35 MS: 1 EraseBytes- 00:10:02.757 [2024-10-09 01:47:32.231993] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aa3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:02.757 [2024-10-09 01:47:32.232018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:02.757 #27 NEW cov: 12310 ft: 14993 corp: 22/397b lim: 40 exec/s: 27 rss: 74Mb L: 15/35 MS: 1 CrossOver- 00:10:02.757 [2024-10-09 01:47:32.272073] 
nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:a3a2a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:02.757 [2024-10-09 01:47:32.272099] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:02.757 #28 NEW cov: 12310 ft: 15021 corp: 23/405b lim: 40 exec/s: 28 rss: 74Mb L: 8/35 MS: 1 CrossOver- 00:10:02.757 [2024-10-09 01:47:32.332559] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aa3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:02.757 [2024-10-09 01:47:32.332585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:02.757 [2024-10-09 01:47:32.332639] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:a3a30aa3 cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:02.757 [2024-10-09 01:47:32.332654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:02.757 [2024-10-09 01:47:32.332709] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:02.757 [2024-10-09 01:47:32.332723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:02.757 #29 NEW cov: 12310 ft: 15237 corp: 24/431b lim: 40 exec/s: 29 rss: 75Mb L: 26/35 MS: 1 InsertRepeatedBytes- 00:10:02.757 [2024-10-09 01:47:32.372389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:02.757 [2024-10-09 01:47:32.372415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:02.757 #30 NEW cov: 12310 ft: 15259 corp: 25/444b lim: 40 exec/s: 30 rss: 75Mb L: 13/35 MS: 1 CrossOver- 00:10:03.016 [2024-10-09 01:47:32.432970] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:a3a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:03.016 [2024-10-09 01:47:32.432996] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:03.016 [2024-10-09 01:47:32.433054] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:a3a5a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:03.016 [2024-10-09 01:47:32.433074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:03.016 [2024-10-09 01:47:32.433128] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:a3a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:03.016 [2024-10-09 01:47:32.433141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:03.016 [2024-10-09 01:47:32.433196] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:a323a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:03.016 [2024-10-09 01:47:32.433209] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:03.016 #31 NEW cov: 12310 ft: 15274 corp: 26/479b lim: 40 exec/s: 31 rss: 75Mb L: 35/35 MS: 1 ChangeBinInt- 00:10:03.016 [2024-10-09 01:47:32.492825] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:03.016 [2024-10-09 01:47:32.492852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:03.016 [2024-10-09 01:47:32.492908] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:0000a3a3 cdw11:a3000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:03.016 [2024-10-09 01:47:32.492921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:03.016 #32 NEW cov: 12310 ft: 15283 corp: 27/502b lim: 40 exec/s: 32 rss: 75Mb L: 23/35 MS: 1 CrossOver- 00:10:03.016 [2024-10-09 01:47:32.532948] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aa3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:03.016 [2024-10-09 01:47:32.532974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:03.016 [2024-10-09 01:47:32.533030] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:a3a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:03.016 [2024-10-09 01:47:32.533044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:03.016 #33 NEW cov: 12310 ft: 15291 corp: 28/522b lim: 40 exec/s: 33 rss: 75Mb L: 20/35 MS: 1 CopyPart- 00:10:03.016 [2024-10-09 01:47:32.593417] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aa3a3a3 cdw11:a3a3a30a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:03.016 [2024-10-09 01:47:32.593441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:03.017 [2024-10-09 01:47:32.593497] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:a3a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:03.017 [2024-10-09 01:47:32.593512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:03.017 [2024-10-09 01:47:32.593565] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:a3a3a3a3 cdw11:a3a30aa3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:03.017 [2024-10-09 01:47:32.593579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:03.017 [2024-10-09 01:47:32.593632] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:a3a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:03.017 [2024-10-09 01:47:32.593645] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:03.017 #34 NEW cov: 12310 ft: 15304 corp: 29/558b lim: 40 exec/s: 34 rss: 75Mb L: 36/36 MS: 1 CopyPart- 00:10:03.017 [2024-10-09 
01:47:32.633080] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0a0000a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:03.017 [2024-10-09 01:47:32.633105] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:03.017 #35 NEW cov: 12310 ft: 15310 corp: 30/572b lim: 40 exec/s: 35 rss: 75Mb L: 14/36 MS: 1 CMP- DE: "\000\000"- 00:10:03.017 [2024-10-09 01:47:32.673180] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0000007e cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:03.017 [2024-10-09 01:47:32.673205] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:03.275 #36 NEW cov: 12310 ft: 15321 corp: 31/585b lim: 40 exec/s: 36 rss: 75Mb L: 13/36 MS: 1 InsertByte- 00:10:03.275 [2024-10-09 01:47:32.713786] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:a3a3a3a3 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:03.275 [2024-10-09 01:47:32.713811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:03.275 [2024-10-09 01:47:32.713872] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:03.275 [2024-10-09 01:47:32.713886] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:03.275 [2024-10-09 01:47:32.713939] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:6 nsid:0 cdw10:00000000 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:03.275 [2024-10-09 01:47:32.713952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:03.275 [2024-10-09 01:47:32.714007] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:7 nsid:0 cdw10:a3a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:03.275 [2024-10-09 01:47:32.714020] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:03.275 #37 NEW cov: 12310 ft: 15330 corp: 32/620b lim: 40 exec/s: 37 rss: 75Mb L: 35/36 MS: 1 CrossOver- 00:10:03.275 [2024-10-09 01:47:32.753547] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aa3a3a3 cdw11:a3a33da3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:03.275 [2024-10-09 01:47:32.753571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:03.275 [2024-10-09 01:47:32.753628] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:a3a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:03.275 [2024-10-09 01:47:32.753641] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:03.275 #38 NEW cov: 12310 ft: 15353 corp: 33/636b lim: 40 exec/s: 38 rss: 75Mb L: 16/36 MS: 1 InsertByte- 00:10:03.275 [2024-10-09 01:47:32.793644] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 
cdw10:41a3a3a3 cdw11:a3a33da3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:03.275 [2024-10-09 01:47:32.793669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:03.275 [2024-10-09 01:47:32.793726] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:5 nsid:0 cdw10:a3a3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:03.275 [2024-10-09 01:47:32.793739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:03.275 #39 NEW cov: 12310 ft: 15363 corp: 34/652b lim: 40 exec/s: 39 rss: 75Mb L: 16/36 MS: 1 ChangeByte- 00:10:03.275 [2024-10-09 01:47:32.853664] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aa3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:03.275 [2024-10-09 01:47:32.853693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:03.276 #40 NEW cov: 12310 ft: 15378 corp: 35/667b lim: 40 exec/s: 40 rss: 75Mb L: 15/36 MS: 1 ShuffleBytes- 00:10:03.276 [2024-10-09 01:47:32.893781] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE SEND (19) qid:0 cid:4 nsid:0 cdw10:0aa3a3a3 cdw11:a3a3a3a3 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:03.276 [2024-10-09 01:47:32.893806] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:03.276 #41 NEW cov: 12310 ft: 15385 corp: 36/682b lim: 40 exec/s: 20 rss: 75Mb L: 15/36 MS: 1 PersAutoDict- DE: "\000\000"- 00:10:03.276 #41 DONE cov: 12310 ft: 15385 corp: 36/682b lim: 40 exec/s: 20 rss: 75Mb 00:10:03.276 ###### Recommended dictionary. ###### 00:10:03.276 "\000\000" # Uses: 1 00:10:03.276 ###### End of recommended dictionary. 
###### 00:10:03.276 Done 41 runs in 2 second(s) 00:10:03.535 01:47:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_12.conf /var/tmp/suppress_nvmf_fuzz 00:10:03.535 01:47:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:03.535 01:47:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:03.535 01:47:33 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 13 1 0x1 00:10:03.535 01:47:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=13 00:10:03.535 01:47:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:10:03.535 01:47:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:10:03.535 01:47:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:10:03.535 01:47:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_13.conf 00:10:03.535 01:47:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:10:03.535 01:47:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:10:03.535 01:47:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 13 00:10:03.535 01:47:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4413 00:10:03.535 01:47:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:10:03.535 01:47:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' 00:10:03.535 01:47:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4413"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:10:03.535 01:47:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:10:03.535 01:47:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:10:03.535 01:47:33 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4413' -c /tmp/fuzz_json_13.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 -Z 13 00:10:03.535 [2024-10-09 01:47:33.075502] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 
00:10:03.535 [2024-10-09 01:47:33.075568] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4044790 ] 00:10:03.794 [2024-10-09 01:47:33.280417] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.794 [2024-10-09 01:47:33.319216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.794 [2024-10-09 01:47:33.378299] tcp.c: 754:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:03.794 [2024-10-09 01:47:33.394526] tcp.c:1098:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4413 *** 00:10:03.794 INFO: Running with entropic power schedule (0xFF, 100). 00:10:03.794 INFO: Seed: 1045380081 00:10:03.794 INFO: Loaded 1 modules (383850 inline 8-bit counters): 383850 [0x2be0a0c, 0x2c3e576), 00:10:03.794 INFO: Loaded 1 PC tables (383850 PCs): 383850 [0x2c3e578,0x3219c18), 00:10:03.794 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_13 00:10:03.794 INFO: A corpus is not provided, starting from an empty corpus 00:10:03.794 #2 INITED exec/s: 0 rss: 66Mb 00:10:03.794 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:10:03.794 This may also happen if the target rejected all inputs we tried so far 00:10:03.794 [2024-10-09 01:47:33.442385] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:03.794 [2024-10-09 01:47:33.442415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:04.310 NEW_FUNC[1/714]: 0x44e138 in fuzz_admin_directive_receive_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:257 00:10:04.310 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:10:04.310 #13 NEW cov: 12071 ft: 12045 corp: 2/14b lim: 40 exec/s: 0 rss: 73Mb L: 13/13 MS: 1 InsertRepeatedBytes- 00:10:04.311 [2024-10-09 01:47:33.763274] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.311 [2024-10-09 01:47:33.763320] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:04.311 #14 NEW cov: 12184 ft: 12660 corp: 3/27b lim: 40 exec/s: 0 rss: 73Mb L: 13/13 MS: 1 ShuffleBytes- 00:10:04.311 [2024-10-09 01:47:33.823288] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.311 [2024-10-09 01:47:33.823316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:04.311 #15 NEW cov: 12190 ft: 12822 corp: 4/40b lim: 40 exec/s: 0 rss: 74Mb L: 13/13 MS: 1 ChangeBinInt- 00:10:04.311 [2024-10-09 01:47:33.883420] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.311 [2024-10-09 01:47:33.883445] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:04.311 #16 NEW cov: 12275 ft: 13181 corp: 5/53b lim: 40 exec/s: 0 rss: 74Mb L: 13/13 MS: 1 ShuffleBytes- 00:10:04.311 [2024-10-09 01:47:33.923530] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.311 [2024-10-09 01:47:33.923556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:04.311 #17 NEW cov: 12275 ft: 13330 corp: 6/66b lim: 40 exec/s: 0 rss: 74Mb L: 13/13 MS: 1 ShuffleBytes- 00:10:04.311 [2024-10-09 01:47:33.963636] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.311 [2024-10-09 01:47:33.963661] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:04.569 #18 NEW cov: 12275 ft: 13438 corp: 7/79b lim: 40 exec/s: 0 rss: 74Mb L: 13/13 MS: 1 ShuffleBytes- 00:10:04.569 [2024-10-09 01:47:34.023808] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:3f0c0a00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.569 [2024-10-09 01:47:34.023839] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:04.569 #19 NEW cov: 12275 ft: 13580 corp: 8/94b lim: 40 exec/s: 0 rss: 74Mb L: 15/15 MS: 1 CMP- DE: "?\014"- 00:10:04.569 [2024-10-09 01:47:34.064043] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.569 [2024-10-09 01:47:34.064068] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:04.569 [2024-10-09 01:47:34.064124] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:3f0c0a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.569 [2024-10-09 01:47:34.064138] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:04.569 #20 NEW cov: 12275 ft: 13890 corp: 9/113b lim: 40 exec/s: 0 rss: 74Mb L: 19/19 MS: 1 CrossOver- 00:10:04.569 [2024-10-09 01:47:34.124086] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00007745 cdw11:62062124 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.569 [2024-10-09 01:47:34.124110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:04.569 #21 NEW cov: 12275 ft: 13910 corp: 10/126b lim: 40 exec/s: 0 rss: 74Mb L: 13/19 MS: 1 CMP- DE: "wEb\006!$'\000"- 00:10:04.569 [2024-10-09 01:47:34.184252] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.569 [2024-10-09 01:47:34.184277] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:04.569 #22 NEW cov: 12275 ft: 13987 corp: 11/139b lim: 40 exec/s: 0 rss: 74Mb L: 13/19 MS: 1 CopyPart- 00:10:04.569 
[2024-10-09 01:47:34.224380] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.569 [2024-10-09 01:47:34.224404] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:04.828 #23 NEW cov: 12275 ft: 14042 corp: 12/152b lim: 40 exec/s: 0 rss: 74Mb L: 13/19 MS: 1 CrossOver- 00:10:04.828 [2024-10-09 01:47:34.264490] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a00fbff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.828 [2024-10-09 01:47:34.264515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:04.828 #24 NEW cov: 12275 ft: 14065 corp: 13/165b lim: 40 exec/s: 0 rss: 74Mb L: 13/19 MS: 1 ChangeBinInt- 00:10:04.828 [2024-10-09 01:47:34.304596] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00003f0c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.828 [2024-10-09 01:47:34.304623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:04.828 NEW_FUNC[1/1]: 0x1bf30c8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:10:04.828 #25 NEW cov: 12298 ft: 14089 corp: 14/178b lim: 40 exec/s: 0 rss: 74Mb L: 13/19 MS: 1 PersAutoDict- DE: "?\014"- 00:10:04.828 [2024-10-09 01:47:34.344708] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.828 [2024-10-09 01:47:34.344734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:04.828 #26 NEW cov: 12298 ft: 14112 corp: 15/189b lim: 40 exec/s: 0 rss: 74Mb L: 11/19 MS: 1 EraseBytes- 00:10:04.828 [2024-10-09 01:47:34.384817] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.828 [2024-10-09 01:47:34.384843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:04.828 #27 NEW cov: 12298 ft: 14130 corp: 16/200b lim: 40 exec/s: 0 rss: 74Mb L: 11/19 MS: 1 ShuffleBytes- 00:10:04.828 [2024-10-09 01:47:34.445025] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:00003f0c SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:04.828 [2024-10-09 01:47:34.445052] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:04.828 #28 NEW cov: 12298 ft: 14177 corp: 17/213b lim: 40 exec/s: 28 rss: 74Mb L: 13/19 MS: 1 CrossOver- 00:10:05.110 [2024-10-09 01:47:34.505285] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a00ff26 cdw11:2421ad22 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.110 [2024-10-09 01:47:34.505311] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:05.110 [2024-10-09 01:47:34.505366] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE 
RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:59900000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.110 [2024-10-09 01:47:34.505379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:05.110 #29 NEW cov: 12298 ft: 14188 corp: 18/234b lim: 40 exec/s: 29 rss: 74Mb L: 21/21 MS: 1 CMP- DE: "\377&$!\255\"Y\220"- 00:10:05.110 [2024-10-09 01:47:34.565690] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a00ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.110 [2024-10-09 01:47:34.565716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:05.110 [2024-10-09 01:47:34.565788] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.110 [2024-10-09 01:47:34.565802] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:05.110 [2024-10-09 01:47:34.565860] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.110 [2024-10-09 01:47:34.565875] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:05.110 [2024-10-09 01:47:34.565929] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.110 [2024-10-09 01:47:34.565943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:05.110 #30 NEW cov: 12298 ft: 14727 corp: 19/273b lim: 40 exec/s: 30 rss: 74Mb L: 39/39 MS: 1 InsertRepeatedBytes- 00:10:05.110 [2024-10-09 01:47:34.625518] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:08000000 cdw11:00000700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.110 [2024-10-09 01:47:34.625543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:05.110 #31 NEW cov: 12298 ft: 14773 corp: 20/286b lim: 40 exec/s: 31 rss: 74Mb L: 13/39 MS: 1 ChangeBit- 00:10:05.110 [2024-10-09 01:47:34.685671] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.110 [2024-10-09 01:47:34.685696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:05.110 #32 NEW cov: 12298 ft: 14783 corp: 21/299b lim: 40 exec/s: 32 rss: 74Mb L: 13/39 MS: 1 ShuffleBytes- 00:10:05.110 [2024-10-09 01:47:34.725798] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00000a00 cdw11:00000a00 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.110 [2024-10-09 01:47:34.725829] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:05.110 #33 NEW cov: 12298 ft: 14819 corp: 22/312b lim: 40 exec/s: 33 rss: 74Mb L: 13/39 MS: 1 ChangeBinInt- 00:10:05.369 [2024-10-09 01:47:34.785940] nvme_qpair.c: 
225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:0068003f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.369 [2024-10-09 01:47:34.785966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:05.369 #34 NEW cov: 12298 ft: 14827 corp: 23/326b lim: 40 exec/s: 34 rss: 74Mb L: 14/39 MS: 1 InsertByte- 00:10:05.369 [2024-10-09 01:47:34.846092] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a000000 cdw11:0000ffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.369 [2024-10-09 01:47:34.846117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:05.369 #35 NEW cov: 12298 ft: 14839 corp: 24/339b lim: 40 exec/s: 35 rss: 74Mb L: 13/39 MS: 1 CrossOver- 00:10:05.369 [2024-10-09 01:47:34.886224] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a7386b4 cdw11:6e212427 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.369 [2024-10-09 01:47:34.886249] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:05.369 #36 NEW cov: 12298 ft: 14857 corp: 25/352b lim: 40 exec/s: 36 rss: 74Mb L: 13/39 MS: 1 CMP- DE: "s\206\264n!$'\000"- 00:10:05.369 [2024-10-09 01:47:34.926330] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a00000d cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.369 [2024-10-09 01:47:34.926355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:05.369 #37 NEW cov: 12298 ft: 14900 corp: 26/365b lim: 40 exec/s: 37 rss: 74Mb L: 13/39 MS: 1 ChangeBinInt- 00:10:05.369 [2024-10-09 01:47:34.966443] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:c1f3f5f9 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.369 [2024-10-09 01:47:34.966468] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:05.369 #38 NEW cov: 12298 ft: 14921 corp: 27/380b lim: 40 exec/s: 38 rss: 75Mb L: 15/39 MS: 1 ChangeBinInt- 00:10:05.369 [2024-10-09 01:47:35.026983] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:fafafafa cdw11:fafafafa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.369 [2024-10-09 01:47:35.027008] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:05.369 [2024-10-09 01:47:35.027065] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:fafafafa cdw11:fafafafa SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.369 [2024-10-09 01:47:35.027078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:05.369 [2024-10-09 01:47:35.027133] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:fa0a0000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.369 [2024-10-09 01:47:35.027146] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:05.369 
[2024-10-09 01:47:35.027202] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:00000000 cdw11:003f0c0a SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.369 [2024-10-09 01:47:35.027215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:05.627 #39 NEW cov: 12298 ft: 14946 corp: 28/416b lim: 40 exec/s: 39 rss: 75Mb L: 36/39 MS: 1 InsertRepeatedBytes- 00:10:05.627 [2024-10-09 01:47:35.087143] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a00ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.627 [2024-10-09 01:47:35.087171] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:05.627 [2024-10-09 01:47:35.087228] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:ffffffff cdw11:fffffdff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.627 [2024-10-09 01:47:35.087241] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:05.627 [2024-10-09 01:47:35.087296] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.627 [2024-10-09 01:47:35.087309] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:05.627 [2024-10-09 01:47:35.087364] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.627 [2024-10-09 01:47:35.087377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:05.627 #40 NEW cov: 12298 ft: 14955 corp: 29/455b lim: 40 exec/s: 40 rss: 75Mb L: 39/39 MS: 1 ChangeBit- 00:10:05.627 [2024-10-09 01:47:35.146952] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:08000000 cdw11:00000702 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.627 [2024-10-09 01:47:35.146977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:05.627 #41 NEW cov: 12298 ft: 14973 corp: 30/468b lim: 40 exec/s: 41 rss: 75Mb L: 13/39 MS: 1 ChangeBit- 00:10:05.627 [2024-10-09 01:47:35.207077] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:00200a00 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.627 [2024-10-09 01:47:35.207102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:05.627 #42 NEW cov: 12298 ft: 14981 corp: 31/481b lim: 40 exec/s: 42 rss: 75Mb L: 13/39 MS: 1 ChangeBit- 00:10:05.627 [2024-10-09 01:47:35.247254] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:08000000 cdw11:0000071e SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.627 [2024-10-09 01:47:35.247278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:05.627 #43 NEW cov: 12298 ft: 14987 corp: 32/496b lim: 40 exec/s: 43 rss: 75Mb L: 15/39 MS: 
1 CMP- DE: "\036\000"- 00:10:05.885 [2024-10-09 01:47:35.307389] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:08000000 cdw11:00000700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.885 [2024-10-09 01:47:35.307414] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:05.885 #44 NEW cov: 12298 ft: 14991 corp: 33/509b lim: 40 exec/s: 44 rss: 75Mb L: 13/39 MS: 1 ChangeBinInt- 00:10:05.885 [2024-10-09 01:47:35.347853] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a00ffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.885 [2024-10-09 01:47:35.347878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:05.885 [2024-10-09 01:47:35.347936] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:5 nsid:0 cdw10:00272421 cdw11:ad7ffef4 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.885 [2024-10-09 01:47:35.347949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:05.886 [2024-10-09 01:47:35.348004] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:6 nsid:0 cdw10:ffffffff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.886 [2024-10-09 01:47:35.348018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:05.886 [2024-10-09 01:47:35.348079] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:7 nsid:0 cdw10:ffffffff cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.886 [2024-10-09 01:47:35.348092] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:05.886 #45 NEW cov: 12298 ft: 14997 corp: 34/548b lim: 40 exec/s: 45 rss: 75Mb L: 39/39 MS: 1 CMP- DE: "\000'$!\255\177\376\364"- 00:10:05.886 [2024-10-09 01:47:35.407656] nvme_qpair.c: 225:nvme_admin_qpair_print_command: *NOTICE*: DIRECTIVE RECEIVE (1a) qid:0 cid:4 nsid:0 cdw10:0a00fbff cdw11:ffffffff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:05.886 [2024-10-09 01:47:35.407682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:05.886 #46 NEW cov: 12298 ft: 15009 corp: 35/563b lim: 40 exec/s: 23 rss: 75Mb L: 15/39 MS: 1 CrossOver- 00:10:05.886 #46 DONE cov: 12298 ft: 15009 corp: 35/563b lim: 40 exec/s: 23 rss: 75Mb 00:10:05.886 ###### Recommended dictionary. ###### 00:10:05.886 "?\014" # Uses: 1 00:10:05.886 "wEb\006!$'\000" # Uses: 0 00:10:05.886 "\377&$!\255\"Y\220" # Uses: 0 00:10:05.886 "s\206\264n!$'\000" # Uses: 0 00:10:05.886 "\036\000" # Uses: 0 00:10:05.886 "\000'$!\255\177\376\364" # Uses: 0 00:10:05.886 ###### End of recommended dictionary. 
###### 00:10:05.886 Done 46 runs in 2 second(s) 00:10:06.144 01:47:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_13.conf /var/tmp/suppress_nvmf_fuzz 00:10:06.144 01:47:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:06.144 01:47:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:06.144 01:47:35 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 14 1 0x1 00:10:06.144 01:47:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=14 00:10:06.144 01:47:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:10:06.144 01:47:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:10:06.144 01:47:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:10:06.144 01:47:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_14.conf 00:10:06.144 01:47:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:10:06.144 01:47:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:10:06.144 01:47:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 14 00:10:06.144 01:47:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4414 00:10:06.144 01:47:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:10:06.144 01:47:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' 00:10:06.144 01:47:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4414"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:10:06.144 01:47:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:10:06.144 01:47:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:10:06.144 01:47:35 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4414' -c /tmp/fuzz_json_14.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 -Z 14 00:10:06.144 [2024-10-09 01:47:35.608283] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 
00:10:06.144 [2024-10-09 01:47:35.608349] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4045102 ] 00:10:06.411 [2024-10-09 01:47:35.813160] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.411 [2024-10-09 01:47:35.852367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.411 [2024-10-09 01:47:35.911489] tcp.c: 754:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:06.411 [2024-10-09 01:47:35.927705] tcp.c:1098:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4414 *** 00:10:06.411 INFO: Running with entropic power schedule (0xFF, 100). 00:10:06.411 INFO: Seed: 3580367283 00:10:06.411 INFO: Loaded 1 modules (383850 inline 8-bit counters): 383850 [0x2be0a0c, 0x2c3e576), 00:10:06.411 INFO: Loaded 1 PC tables (383850 PCs): 383850 [0x2c3e578,0x3219c18), 00:10:06.411 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_14 00:10:06.411 INFO: A corpus is not provided, starting from an empty corpus 00:10:06.411 #2 INITED exec/s: 0 rss: 66Mb 00:10:06.412 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:10:06.412 This may also happen if the target rejected all inputs we tried so far 00:10:06.412 [2024-10-09 01:47:36.005990] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:06.412 [2024-10-09 01:47:36.006031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:06.412 [2024-10-09 01:47:36.006139] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:06.412 [2024-10-09 01:47:36.006159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:06.412 [2024-10-09 01:47:36.006272] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:06.412 [2024-10-09 01:47:36.006292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:06.675 NEW_FUNC[1/715]: 0x44fd08 in fuzz_admin_set_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:392 00:10:06.675 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:10:06.675 #14 NEW cov: 12047 ft: 12047 corp: 2/23b lim: 35 exec/s: 0 rss: 74Mb L: 22/22 MS: 2 InsertByte-InsertRepeatedBytes- 00:10:06.933 [2024-10-09 01:47:36.346518] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:06.933 [2024-10-09 01:47:36.346564] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:06.933 [2024-10-09 01:47:36.346675] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:06.933 [2024-10-09 01:47:36.346697] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:06.933 [2024-10-09 01:47:36.346798] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:06.933 [2024-10-09 01:47:36.346823] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:06.933 #18 NEW cov: 12184 ft: 12699 corp: 3/50b lim: 35 exec/s: 0 rss: 74Mb L: 27/27 MS: 4 ChangeByte-InsertByte-InsertByte-InsertRepeatedBytes- 00:10:06.933 [2024-10-09 01:47:36.396164] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:06.933 [2024-10-09 01:47:36.396198] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:06.933 [2024-10-09 01:47:36.396312] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:06.933 [2024-10-09 01:47:36.396330] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:06.933 #19 NEW cov: 12190 ft: 13017 corp: 4/65b lim: 35 exec/s: 0 rss: 74Mb L: 15/27 MS: 1 CrossOver- 00:10:06.933 [2024-10-09 01:47:36.447043] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:06.933 [2024-10-09 01:47:36.447071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:06.933 [2024-10-09 01:47:36.447167] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:06.933 [2024-10-09 01:47:36.447183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:06.933 NEW_FUNC[1/2]: 0x471258 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:10:06.933 NEW_FUNC[2/2]: 0x1343908 in nvmf_ctrlr_set_features_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1766 00:10:06.933 #20 NEW cov: 12308 ft: 13359 corp: 5/87b lim: 35 exec/s: 0 rss: 74Mb L: 22/27 MS: 1 CrossOver- 00:10:06.933 [2024-10-09 01:47:36.507838] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:06.933 [2024-10-09 01:47:36.507865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:06.933 [2024-10-09 01:47:36.507963] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:06.933 [2024-10-09 01:47:36.507979] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:06.933 [2024-10-09 01:47:36.508069] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES SOFTWARE PROGRESS MARKER cid:7 cdw10:00000080 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:06.933 [2024-10-09 01:47:36.508085] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 
cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:06.933 #21 NEW cov: 12308 ft: 13683 corp: 6/121b lim: 35 exec/s: 0 rss: 74Mb L: 34/34 MS: 1 InsertRepeatedBytes- 00:10:06.933 [2024-10-09 01:47:36.577413] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:06.933 [2024-10-09 01:47:36.577440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:06.933 [2024-10-09 01:47:36.577539] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:06.933 [2024-10-09 01:47:36.577555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:07.192 #22 NEW cov: 12308 ft: 13759 corp: 7/136b lim: 35 exec/s: 0 rss: 74Mb L: 15/34 MS: 1 ShuffleBytes- 00:10:07.192 [2024-10-09 01:47:36.647358] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.192 [2024-10-09 01:47:36.647386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:07.192 #25 NEW cov: 12308 ft: 14455 corp: 8/149b lim: 35 exec/s: 0 rss: 74Mb L: 13/34 MS: 3 ChangeBit-InsertByte-InsertRepeatedBytes- 00:10:07.192 [2024-10-09 01:47:36.698596] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.192 [2024-10-09 01:47:36.698622] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:07.192 [2024-10-09 01:47:36.698715] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.192 [2024-10-09 01:47:36.698733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:07.192 [2024-10-09 01:47:36.698832] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.192 [2024-10-09 01:47:36.698862] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:07.192 #26 NEW cov: 12308 ft: 14525 corp: 9/176b lim: 35 exec/s: 0 rss: 74Mb L: 27/34 MS: 1 ChangeBit- 00:10:07.192 [2024-10-09 01:47:36.768755] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.192 [2024-10-09 01:47:36.768780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:07.192 [2024-10-09 01:47:36.768878] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.192 [2024-10-09 01:47:36.768896] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:07.192 [2024-10-09 01:47:36.768992] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.192 [2024-10-09 01:47:36.769009] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:07.192 #27 NEW cov: 12308 ft: 14547 corp: 10/203b lim: 35 exec/s: 0 rss: 74Mb L: 27/34 MS: 1 ChangeBinInt- 00:10:07.192 [2024-10-09 01:47:36.818996] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.192 [2024-10-09 01:47:36.819022] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:07.192 [2024-10-09 01:47:36.819123] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.192 [2024-10-09 01:47:36.819140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:07.192 [2024-10-09 01:47:36.819235] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.192 [2024-10-09 01:47:36.819253] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:07.450 NEW_FUNC[1/1]: 0x1bf30c8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:10:07.450 #28 NEW cov: 12331 ft: 14673 corp: 11/225b lim: 35 exec/s: 0 rss: 74Mb L: 22/34 MS: 1 ChangeBit- 00:10:07.450 [2024-10-09 01:47:36.889301] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.450 [2024-10-09 01:47:36.889327] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:07.450 [2024-10-09 01:47:36.889426] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.450 [2024-10-09 01:47:36.889444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:07.450 #29 NEW cov: 12331 ft: 14688 corp: 12/247b lim: 35 exec/s: 0 rss: 74Mb L: 22/34 MS: 1 ChangeBinInt- 00:10:07.450 [2024-10-09 01:47:36.938612] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.450 [2024-10-09 01:47:36.938639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:07.450 #30 NEW cov: 12331 ft: 14735 corp: 13/260b lim: 35 exec/s: 30 rss: 75Mb L: 13/34 MS: 1 ChangeBit- 00:10:07.450 [2024-10-09 01:47:37.009005] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.450 [2024-10-09 01:47:37.009034] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:07.450 #31 NEW cov: 12331 ft: 14746 corp: 14/273b lim: 35 exec/s: 31 rss: 75Mb L: 13/34 MS: 1 ChangeByte- 00:10:07.450 [2024-10-09 01:47:37.079879] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.450 [2024-10-09 01:47:37.079905] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 
m:0 dnr:0 00:10:07.450 [2024-10-09 01:47:37.080014] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.450 [2024-10-09 01:47:37.080033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:07.450 [2024-10-09 01:47:37.080135] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000001b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.450 [2024-10-09 01:47:37.080154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:07.707 #32 NEW cov: 12331 ft: 14778 corp: 15/300b lim: 35 exec/s: 32 rss: 75Mb L: 27/34 MS: 1 ChangeBinInt- 00:10:07.708 [2024-10-09 01:47:37.150081] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.708 [2024-10-09 01:47:37.150108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:07.708 [2024-10-09 01:47:37.150204] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.708 [2024-10-09 01:47:37.150224] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:07.708 [2024-10-09 01:47:37.150316] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000001b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.708 [2024-10-09 01:47:37.150332] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:07.708 #33 NEW cov: 12331 ft: 14793 corp: 16/327b lim: 35 exec/s: 33 rss: 75Mb L: 27/34 MS: 1 ChangeBit- 00:10:07.708 [2024-10-09 01:47:37.220468] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000059 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.708 [2024-10-09 01:47:37.220496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:07.708 [2024-10-09 01:47:37.220594] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.708 [2024-10-09 01:47:37.220611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:07.708 #34 NEW cov: 12331 ft: 14821 corp: 17/349b lim: 35 exec/s: 34 rss: 75Mb L: 22/34 MS: 1 CMP- DE: "\001'$\"\254Y\203\332"- 00:10:07.708 [2024-10-09 01:47:37.290416] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES WRITE ATOMICITY cid:4 cdw10:8000000a SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.708 [2024-10-09 01:47:37.290444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:07.708 [2024-10-09 01:47:37.290534] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ba SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.708 [2024-10-09 01:47:37.290553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:07.708 #40 NEW 
cov: 12331 ft: 14863 corp: 18/369b lim: 35 exec/s: 40 rss: 75Mb L: 20/34 MS: 1 InsertRepeatedBytes- 00:10:07.708 [2024-10-09 01:47:37.340555] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.708 [2024-10-09 01:47:37.340581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:07.708 [2024-10-09 01:47:37.340680] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.708 [2024-10-09 01:47:37.340696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:07.966 #41 NEW cov: 12331 ft: 14874 corp: 19/384b lim: 35 exec/s: 41 rss: 75Mb L: 15/34 MS: 1 ChangeBinInt- 00:10:07.966 [2024-10-09 01:47:37.411274] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.966 [2024-10-09 01:47:37.411300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:07.966 [2024-10-09 01:47:37.411392] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.966 [2024-10-09 01:47:37.411410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:07.966 [2024-10-09 01:47:37.411507] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:80000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.966 [2024-10-09 01:47:37.411527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:07.966 #42 NEW cov: 12331 ft: 14922 corp: 20/406b lim: 35 exec/s: 42 rss: 75Mb L: 22/34 MS: 1 ChangeBinInt- 00:10:07.966 [2024-10-09 01:47:37.481985] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.966 [2024-10-09 01:47:37.482012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:07.966 [2024-10-09 01:47:37.482104] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.966 [2024-10-09 01:47:37.482124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:07.966 [2024-10-09 01:47:37.482221] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.966 [2024-10-09 01:47:37.482238] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:07.966 [2024-10-09 01:47:37.482353] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:8000001b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.966 [2024-10-09 01:47:37.482372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:07.966 #43 NEW cov: 12331 ft: 15097 corp: 21/434b lim: 35 exec/s: 43 rss: 
75Mb L: 28/34 MS: 1 InsertByte- 00:10:07.966 [2024-10-09 01:47:37.531680] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.966 [2024-10-09 01:47:37.531706] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:07.966 [2024-10-09 01:47:37.531796] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.966 [2024-10-09 01:47:37.531819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:07.966 [2024-10-09 01:47:37.531917] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.966 [2024-10-09 01:47:37.531934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:07.966 #44 NEW cov: 12331 ft: 15135 corp: 22/456b lim: 35 exec/s: 44 rss: 75Mb L: 22/34 MS: 1 ChangeByte- 00:10:07.966 [2024-10-09 01:47:37.581225] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.966 [2024-10-09 01:47:37.581255] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:07.966 #45 NEW cov: 12331 ft: 15143 corp: 23/469b lim: 35 exec/s: 45 rss: 75Mb L: 13/34 MS: 1 ChangeBit- 00:10:07.966 [2024-10-09 01:47:37.632165] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.966 [2024-10-09 01:47:37.632190] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:07.966 [2024-10-09 01:47:37.632287] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.966 [2024-10-09 01:47:37.632304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:07.966 [2024-10-09 01:47:37.632396] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:07.966 [2024-10-09 01:47:37.632411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:08.225 #46 NEW cov: 12331 ft: 15159 corp: 24/492b lim: 35 exec/s: 46 rss: 75Mb L: 23/34 MS: 1 CrossOver- 00:10:08.225 [2024-10-09 01:47:37.702360] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:08.225 [2024-10-09 01:47:37.702389] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:08.225 #47 NEW cov: 12331 ft: 15162 corp: 25/509b lim: 35 exec/s: 47 rss: 75Mb L: 17/34 MS: 1 EraseBytes- 00:10:08.225 [2024-10-09 01:47:37.773581] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:08.225 [2024-10-09 01:47:37.773607] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 
cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:08.225 [2024-10-09 01:47:37.773705] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:08.225 [2024-10-09 01:47:37.773721] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:08.225 [2024-10-09 01:47:37.773818] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:08.225 [2024-10-09 01:47:37.773865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:08.225 [2024-10-09 01:47:37.773962] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:08.225 [2024-10-09 01:47:37.773980] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:08.225 #48 NEW cov: 12331 ft: 15179 corp: 26/540b lim: 35 exec/s: 48 rss: 75Mb L: 31/34 MS: 1 CopyPart- 00:10:08.225 [2024-10-09 01:47:37.823518] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:08.225 [2024-10-09 01:47:37.823544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:08.225 [2024-10-09 01:47:37.823631] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:80000022 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:08.225 [2024-10-09 01:47:37.823650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:08.225 #49 NEW cov: 12331 ft: 15194 corp: 27/555b lim: 35 exec/s: 49 rss: 75Mb L: 15/34 MS: 1 PersAutoDict- DE: "\001'$\"\254Y\203\332"- 00:10:08.484 [2024-10-09 01:47:37.894337] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:08.484 [2024-10-09 01:47:37.894366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:08.484 [2024-10-09 01:47:37.894463] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:08.484 [2024-10-09 01:47:37.894480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:08.484 [2024-10-09 01:47:37.894582] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:0000001b SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:08.484 [2024-10-09 01:47:37.894600] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:08.484 [2024-10-09 01:47:37.894689] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:08.484 [2024-10-09 01:47:37.894709] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:08.484 #50 NEW cov: 12331 ft: 15233 corp: 28/589b lim: 35 exec/s: 50 rss: 75Mb L: 34/34 MS: 1 
InsertRepeatedBytes- 00:10:08.484 [2024-10-09 01:47:37.944431] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:0000005d SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:08.484 [2024-10-09 01:47:37.944456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:08.484 [2024-10-09 01:47:37.944549] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:08.484 [2024-10-09 01:47:37.944569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:08.484 [2024-10-09 01:47:37.944662] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:08.484 [2024-10-09 01:47:37.944677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:08.484 [2024-10-09 01:47:37.944770] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:800000ff SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:08.484 [2024-10-09 01:47:37.944788] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: FEATURE ID NOT SAVEABLE (01/0d) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:08.484 #51 NEW cov: 12331 ft: 15237 corp: 29/621b lim: 35 exec/s: 51 rss: 75Mb L: 32/34 MS: 1 CopyPart- 00:10:08.484 [2024-10-09 01:47:37.994841] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:4 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:08.484 [2024-10-09 01:47:37.994866] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:08.484 [2024-10-09 01:47:37.994971] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:5 cdw10:00000024 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:08.484 [2024-10-09 01:47:37.994987] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:08.484 [2024-10-09 01:47:37.995085] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:6 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:08.484 [2024-10-09 01:47:37.995102] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:08.484 [2024-10-09 01:47:37.995187] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES RESERVED cid:7 cdw10:00000000 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:08.484 [2024-10-09 01:47:37.995203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:08.484 #52 NEW cov: 12331 ft: 15251 corp: 30/651b lim: 35 exec/s: 26 rss: 75Mb L: 30/34 MS: 1 PersAutoDict- DE: "\001'$\"\254Y\203\332"- 00:10:08.484 #52 DONE cov: 12331 ft: 15251 corp: 30/651b lim: 35 exec/s: 26 rss: 75Mb 00:10:08.484 ###### Recommended dictionary. ###### 00:10:08.484 "\001'$\"\254Y\203\332" # Uses: 2 00:10:08.484 ###### End of recommended dictionary. 
###### 00:10:08.484 Done 52 runs in 2 second(s) 00:10:08.484 01:47:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_14.conf /var/tmp/suppress_nvmf_fuzz 00:10:08.484 01:47:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:08.484 01:47:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:08.484 01:47:38 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 15 1 0x1 00:10:08.484 01:47:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=15 00:10:08.484 01:47:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:10:08.484 01:47:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:10:08.484 01:47:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:10:08.484 01:47:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_15.conf 00:10:08.484 01:47:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:10:08.484 01:47:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:10:08.484 01:47:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 15 00:10:08.484 01:47:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4415 00:10:08.484 01:47:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:10:08.484 01:47:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' 00:10:08.484 01:47:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4415"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:10:08.484 01:47:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:10:08.484 01:47:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:10:08.484 01:47:38 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4415' -c /tmp/fuzz_json_15.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 -Z 15 00:10:08.743 [2024-10-09 01:47:38.169758] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 
00:10:08.743 [2024-10-09 01:47:38.169830] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4045456 ] 00:10:08.743 [2024-10-09 01:47:38.367495] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.743 [2024-10-09 01:47:38.409149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.001 [2024-10-09 01:47:38.468206] tcp.c: 754:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:09.001 [2024-10-09 01:47:38.484418] tcp.c:1098:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4415 *** 00:10:09.001 INFO: Running with entropic power schedule (0xFF, 100). 00:10:09.001 INFO: Seed: 1840407032 00:10:09.001 INFO: Loaded 1 modules (383850 inline 8-bit counters): 383850 [0x2be0a0c, 0x2c3e576), 00:10:09.001 INFO: Loaded 1 PC tables (383850 PCs): 383850 [0x2c3e578,0x3219c18), 00:10:09.001 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_15 00:10:09.001 INFO: A corpus is not provided, starting from an empty corpus 00:10:09.001 #2 INITED exec/s: 0 rss: 66Mb 00:10:09.001 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:10:09.001 This may also happen if the target rejected all inputs we tried so far 00:10:09.001 [2024-10-09 01:47:38.555853] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:09.001 [2024-10-09 01:47:38.555902] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:09.001 [2024-10-09 01:47:38.556027] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:09.001 [2024-10-09 01:47:38.556046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:09.259 NEW_FUNC[1/714]: 0x451248 in fuzz_admin_get_features_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:460 00:10:09.259 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:10:09.259 #13 NEW cov: 12024 ft: 12021 corp: 2/20b lim: 35 exec/s: 0 rss: 73Mb L: 19/19 MS: 1 InsertRepeatedBytes- 00:10:09.259 [2024-10-09 01:47:38.906334] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:09.259 [2024-10-09 01:47:38.906393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:09.259 [2024-10-09 01:47:38.906515] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:09.259 [2024-10-09 01:47:38.906538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:09.517 #14 NEW cov: 12166 ft: 12671 corp: 3/39b lim: 35 exec/s: 0 rss: 74Mb L: 19/19 MS: 1 ChangeBinInt- 00:10:09.517 [2024-10-09 01:47:38.976599] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 
cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:09.517 [2024-10-09 01:47:38.976626] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:09.517 NEW_FUNC[1/1]: 0x46a6e8 in feat_arbitration /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:273 00:10:09.517 #15 NEW cov: 12210 ft: 13202 corp: 4/58b lim: 35 exec/s: 0 rss: 74Mb L: 19/19 MS: 1 ChangeBit- 00:10:09.517 [2024-10-09 01:47:39.046361] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:09.517 [2024-10-09 01:47:39.046388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:09.517 #16 NEW cov: 12295 ft: 13597 corp: 5/68b lim: 35 exec/s: 0 rss: 74Mb L: 10/19 MS: 1 CrossOver- 00:10:09.517 [2024-10-09 01:47:39.096473] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:09.517 [2024-10-09 01:47:39.096499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:09.517 #22 NEW cov: 12295 ft: 13744 corp: 6/78b lim: 35 exec/s: 0 rss: 74Mb L: 10/19 MS: 1 ShuffleBytes- 00:10:09.517 [2024-10-09 01:47:39.167032] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:09.517 [2024-10-09 01:47:39.167058] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:09.776 NEW_FUNC[1/3]: 0x46c9d8 in feat_temperature_threshold /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:295 00:10:09.776 NEW_FUNC[2/3]: 0x1331838 in nvmf_ctrlr_get_features_temperature_threshold /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr.c:1687 00:10:09.776 #23 NEW cov: 12350 ft: 13828 corp: 7/97b lim: 35 exec/s: 0 rss: 74Mb L: 19/19 MS: 1 ChangeBit- 00:10:09.776 [2024-10-09 01:47:39.227379] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:09.776 [2024-10-09 01:47:39.227405] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:09.776 #24 NEW cov: 12350 ft: 13888 corp: 8/117b lim: 35 exec/s: 0 rss: 74Mb L: 20/20 MS: 1 InsertByte- 00:10:09.776 [2024-10-09 01:47:39.297609] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:09.776 [2024-10-09 01:47:39.297636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:09.776 #25 NEW cov: 12350 ft: 13929 corp: 9/136b lim: 35 exec/s: 0 rss: 74Mb L: 19/20 MS: 1 ShuffleBytes- 00:10:09.776 [2024-10-09 01:47:39.367454] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:09.776 [2024-10-09 01:47:39.367480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:09.776 #26 NEW cov: 12350 ft: 13945 corp: 10/147b lim: 35 exec/s: 0 rss: 75Mb L: 11/20 MS: 1 CopyPart- 
00:10:09.776 [2024-10-09 01:47:39.438115] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:09.776 [2024-10-09 01:47:39.438141] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:10.034 NEW_FUNC[1/1]: 0x1bf30c8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:10:10.034 #28 NEW cov: 12373 ft: 14025 corp: 11/167b lim: 35 exec/s: 0 rss: 75Mb L: 20/20 MS: 2 ChangeBit-CrossOver- 00:10:10.034 [2024-10-09 01:47:39.488676] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.034 [2024-10-09 01:47:39.488704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:10.034 [2024-10-09 01:47:39.488804] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000000fb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.034 [2024-10-09 01:47:39.488825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:10.034 #29 NEW cov: 12373 ft: 14133 corp: 12/188b lim: 35 exec/s: 29 rss: 75Mb L: 21/21 MS: 1 InsertByte- 00:10:10.034 [2024-10-09 01:47:39.558185] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.034 [2024-10-09 01:47:39.558212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:10.034 #30 NEW cov: 12373 ft: 14198 corp: 13/199b lim: 35 exec/s: 30 rss: 75Mb L: 11/21 MS: 1 CopyPart- 00:10:10.034 [2024-10-09 01:47:39.628725] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.034 [2024-10-09 01:47:39.628751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:10.034 [2024-10-09 01:47:39.628857] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.034 [2024-10-09 01:47:39.628873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:10.034 #31 NEW cov: 12373 ft: 14237 corp: 14/218b lim: 35 exec/s: 31 rss: 75Mb L: 19/21 MS: 1 CopyPart- 00:10:10.034 [2024-10-09 01:47:39.679701] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.034 [2024-10-09 01:47:39.679729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:10.034 [2024-10-09 01:47:39.679824] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.034 [2024-10-09 01:47:39.679844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:10.034 [2024-10-09 01:47:39.679939] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.034 
[2024-10-09 01:47:39.679955] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:10.292 #32 NEW cov: 12373 ft: 14678 corp: 15/252b lim: 35 exec/s: 32 rss: 75Mb L: 34/34 MS: 1 CrossOver- 00:10:10.292 [2024-10-09 01:47:39.729124] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.292 [2024-10-09 01:47:39.729151] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:10.292 [2024-10-09 01:47:39.729263] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.292 [2024-10-09 01:47:39.729280] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:10.292 #33 NEW cov: 12373 ft: 14725 corp: 16/271b lim: 35 exec/s: 33 rss: 75Mb L: 19/34 MS: 1 ShuffleBytes- 00:10:10.292 [2024-10-09 01:47:39.779447] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.292 [2024-10-09 01:47:39.779475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:10.292 [2024-10-09 01:47:39.779579] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.292 [2024-10-09 01:47:39.779596] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:10.292 #34 NEW cov: 12373 ft: 14732 corp: 17/290b lim: 35 exec/s: 34 rss: 75Mb L: 19/34 MS: 1 CrossOver- 00:10:10.292 [2024-10-09 01:47:39.829485] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.292 [2024-10-09 01:47:39.829515] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:10.292 [2024-10-09 01:47:39.829617] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.292 [2024-10-09 01:47:39.829636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:10.292 #35 NEW cov: 12373 ft: 14756 corp: 18/304b lim: 35 exec/s: 35 rss: 75Mb L: 14/34 MS: 1 InsertRepeatedBytes- 00:10:10.292 [2024-10-09 01:47:39.879686] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.292 [2024-10-09 01:47:39.879716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:10.292 #36 NEW cov: 12373 ft: 14791 corp: 19/315b lim: 35 exec/s: 36 rss: 75Mb L: 11/34 MS: 1 InsertByte- 00:10:10.292 [2024-10-09 01:47:39.930659] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.292 [2024-10-09 01:47:39.930687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:10.292 [2024-10-09 01:47:39.930785] nvme_qpair.c: 
215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000001fb SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.292 [2024-10-09 01:47:39.930801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:10.550 #37 NEW cov: 12373 ft: 14805 corp: 20/336b lim: 35 exec/s: 37 rss: 75Mb L: 21/34 MS: 1 ChangeByte- 00:10:10.550 [2024-10-09 01:47:40.000255] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.550 [2024-10-09 01:47:40.000284] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:10.550 #38 NEW cov: 12373 ft: 14837 corp: 21/347b lim: 35 exec/s: 38 rss: 75Mb L: 11/34 MS: 1 InsertByte- 00:10:10.550 [2024-10-09 01:47:40.051651] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.550 [2024-10-09 01:47:40.051680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:10.550 [2024-10-09 01:47:40.051762] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.550 [2024-10-09 01:47:40.051778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:10.550 [2024-10-09 01:47:40.051875] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:000007ee SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.550 [2024-10-09 01:47:40.051891] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:10.550 NEW_FUNC[1/1]: 0x471258 in feat_write_atomicity /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:340 00:10:10.550 #39 NEW cov: 12387 ft: 14872 corp: 22/375b lim: 35 exec/s: 39 rss: 75Mb L: 28/34 MS: 1 InsertRepeatedBytes- 00:10:10.550 [2024-10-09 01:47:40.101160] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.550 [2024-10-09 01:47:40.101188] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:10.550 [2024-10-09 01:47:40.101290] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.550 [2024-10-09 01:47:40.101307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:10.550 #40 NEW cov: 12387 ft: 14946 corp: 23/395b lim: 35 exec/s: 40 rss: 75Mb L: 20/34 MS: 1 InsertByte- 00:10:10.550 [2024-10-09 01:47:40.171678] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.550 [2024-10-09 01:47:40.171705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:10.550 [2024-10-09 01:47:40.171796] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.550 [2024-10-09 
01:47:40.171811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:10.550 #41 NEW cov: 12387 ft: 14954 corp: 24/409b lim: 35 exec/s: 41 rss: 75Mb L: 14/34 MS: 1 CrossOver- 00:10:10.809 [2024-10-09 01:47:40.242442] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.809 [2024-10-09 01:47:40.242469] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:10.809 [2024-10-09 01:47:40.242560] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.809 [2024-10-09 01:47:40.242577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:10.809 [2024-10-09 01:47:40.242676] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:000007ff SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.809 [2024-10-09 01:47:40.242691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:10.809 [2024-10-09 01:47:40.242784] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:7 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.809 [2024-10-09 01:47:40.242801] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:10:10.809 #42 NEW cov: 12387 ft: 15116 corp: 25/439b lim: 35 exec/s: 42 rss: 75Mb L: 30/34 MS: 1 InsertRepeatedBytes- 00:10:10.809 [2024-10-09 01:47:40.312523] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.809 [2024-10-09 01:47:40.312550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:10.809 [2024-10-09 01:47:40.312641] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:6 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.809 [2024-10-09 01:47:40.312658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:10:10.809 #43 NEW cov: 12387 ft: 15143 corp: 26/464b lim: 35 exec/s: 43 rss: 75Mb L: 25/34 MS: 1 CrossOver- 00:10:10.809 [2024-10-09 01:47:40.362588] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.809 [2024-10-09 01:47:40.362615] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:10.809 [2024-10-09 01:47:40.362709] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.809 [2024-10-09 01:47:40.362726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:10.809 #44 NEW cov: 12387 ft: 15155 corp: 27/483b lim: 35 exec/s: 44 rss: 75Mb L: 19/34 MS: 1 ChangeBinInt- 00:10:10.809 [2024-10-09 01:47:40.413148] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000700 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:10:10.809 [2024-10-09 01:47:40.413175] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:10.809 [2024-10-09 01:47:40.413270] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:10.809 [2024-10-09 01:47:40.413288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:10.809 #45 NEW cov: 12387 ft: 15161 corp: 28/503b lim: 35 exec/s: 45 rss: 75Mb L: 20/34 MS: 1 InsertByte- 00:10:11.067 [2024-10-09 01:47:40.483562] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.067 [2024-10-09 01:47:40.483589] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:11.067 [2024-10-09 01:47:40.483680] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.067 [2024-10-09 01:47:40.483696] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:11.067 #46 NEW cov: 12387 ft: 15180 corp: 29/522b lim: 35 exec/s: 46 rss: 75Mb L: 19/34 MS: 1 ChangeBit- 00:10:11.067 [2024-10-09 01:47:40.534002] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:4 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.067 [2024-10-09 01:47:40.534030] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:10:11.067 [2024-10-09 01:47:40.534135] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES RESERVED cid:5 cdw10:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.067 [2024-10-09 01:47:40.534153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID FIELD (00/02) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:10:11.067 #47 NEW cov: 12387 ft: 15188 corp: 30/541b lim: 35 exec/s: 23 rss: 75Mb L: 19/34 MS: 1 CMP- DE: "\001\000\000\000\000\000\004\000"- 00:10:11.067 #47 DONE cov: 12387 ft: 15188 corp: 30/541b lim: 35 exec/s: 23 rss: 75Mb 00:10:11.067 ###### Recommended dictionary. ###### 00:10:11.067 "\001\000\000\000\000\000\004\000" # Uses: 0 00:10:11.067 ###### End of recommended dictionary. 
###### 00:10:11.067 Done 47 runs in 2 second(s) 00:10:11.067 01:47:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_15.conf /var/tmp/suppress_nvmf_fuzz 00:10:11.067 01:47:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:11.067 01:47:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:11.067 01:47:40 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 16 1 0x1 00:10:11.067 01:47:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=16 00:10:11.067 01:47:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:10:11.067 01:47:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:10:11.067 01:47:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:10:11.068 01:47:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_16.conf 00:10:11.068 01:47:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:10:11.068 01:47:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:10:11.068 01:47:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 16 00:10:11.068 01:47:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4416 00:10:11.068 01:47:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:10:11.068 01:47:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' 00:10:11.068 01:47:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4416"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:10:11.068 01:47:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:10:11.068 01:47:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:10:11.068 01:47:40 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4416' -c /tmp/fuzz_json_16.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 -Z 16 00:10:11.068 [2024-10-09 01:47:40.710474] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 
00:10:11.068 [2024-10-09 01:47:40.710539] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4045815 ] 00:10:11.326 [2024-10-09 01:47:40.918757] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.326 [2024-10-09 01:47:40.959173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.585 [2024-10-09 01:47:41.018733] tcp.c: 754:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:11.585 [2024-10-09 01:47:41.034944] tcp.c:1098:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4416 *** 00:10:11.585 INFO: Running with entropic power schedule (0xFF, 100). 00:10:11.585 INFO: Seed: 96443347 00:10:11.585 INFO: Loaded 1 modules (383850 inline 8-bit counters): 383850 [0x2be0a0c, 0x2c3e576), 00:10:11.585 INFO: Loaded 1 PC tables (383850 PCs): 383850 [0x2c3e578,0x3219c18), 00:10:11.585 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_16 00:10:11.585 INFO: A corpus is not provided, starting from an empty corpus 00:10:11.585 #2 INITED exec/s: 0 rss: 66Mb 00:10:11.585 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:10:11.585 This may also happen if the target rejected all inputs we tried so far 00:10:11.585 [2024-10-09 01:47:41.084578] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13816973009035968447 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.585 [2024-10-09 01:47:41.084609] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:11.585 [2024-10-09 01:47:41.084667] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.585 [2024-10-09 01:47:41.084683] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:11.585 [2024-10-09 01:47:41.084736] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.585 [2024-10-09 01:47:41.084751] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:11.585 [2024-10-09 01:47:41.084804] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.585 [2024-10-09 01:47:41.084824] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:11.843 NEW_FUNC[1/715]: 0x452708 in fuzz_nvm_read_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:519 00:10:11.843 NEW_FUNC[2/715]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:10:11.843 #9 NEW cov: 12157 ft: 12155 corp: 2/100b lim: 105 exec/s: 0 rss: 73Mb L: 99/99 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:10:11.843 [2024-10-09 01:47:41.425467] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 
lba:13816973009035968447 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.843 [2024-10-09 01:47:41.425506] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:11.843 [2024-10-09 01:47:41.425557] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.843 [2024-10-09 01:47:41.425572] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:11.843 [2024-10-09 01:47:41.425624] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.843 [2024-10-09 01:47:41.425639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:11.843 [2024-10-09 01:47:41.425690] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:13816973012072644394 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.843 [2024-10-09 01:47:41.425705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:11.843 #10 NEW cov: 12270 ft: 12653 corp: 3/200b lim: 105 exec/s: 0 rss: 73Mb L: 100/100 MS: 1 InsertByte- 00:10:11.843 [2024-10-09 01:47:41.485346] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13816973009035968447 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.843 [2024-10-09 01:47:41.485375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:11.843 [2024-10-09 01:47:41.485412] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.843 [2024-10-09 01:47:41.485427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:11.843 [2024-10-09 01:47:41.485479] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:11.843 [2024-10-09 01:47:41.485494] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:12.101 #11 NEW cov: 12276 ft: 13414 corp: 4/279b lim: 105 exec/s: 0 rss: 73Mb L: 79/100 MS: 1 EraseBytes- 00:10:12.101 [2024-10-09 01:47:41.525528] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13816973009035968447 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.101 [2024-10-09 01:47:41.525557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:12.101 [2024-10-09 01:47:41.525621] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.101 [2024-10-09 01:47:41.525637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:12.101 [2024-10-09 01:47:41.525688] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.101 [2024-10-09 01:47:41.525702] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:12.101 [2024-10-09 01:47:41.525754] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446509601776975871 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.101 [2024-10-09 01:47:41.525769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:12.101 #12 NEW cov: 12361 ft: 13695 corp: 5/382b lim: 105 exec/s: 0 rss: 73Mb L: 103/103 MS: 1 InsertRepeatedBytes- 00:10:12.101 [2024-10-09 01:47:41.585348] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:235798528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.101 [2024-10-09 01:47:41.585377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:12.101 #17 NEW cov: 12361 ft: 14255 corp: 6/413b lim: 105 exec/s: 0 rss: 73Mb L: 31/103 MS: 5 ChangeBit-CopyPart-CopyPart-ShuffleBytes-InsertRepeatedBytes- 00:10:12.101 [2024-10-09 01:47:41.625800] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13816973009035968447 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.101 [2024-10-09 01:47:41.625830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:12.101 [2024-10-09 01:47:41.625903] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.101 [2024-10-09 01:47:41.625919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:12.101 [2024-10-09 01:47:41.625984] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.101 [2024-10-09 01:47:41.625999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:12.101 [2024-10-09 01:47:41.626049] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.101 [2024-10-09 01:47:41.626064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:12.101 #18 NEW cov: 12361 ft: 14380 corp: 7/512b lim: 105 exec/s: 0 rss: 73Mb L: 99/103 MS: 1 ChangeBinInt- 00:10:12.101 [2024-10-09 01:47:41.665913] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13816973009035968447 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.101 [2024-10-09 01:47:41.665940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:12.101 [2024-10-09 01:47:41.665989] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.101 [2024-10-09 
01:47:41.666007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:12.101 [2024-10-09 01:47:41.666056] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.101 [2024-10-09 01:47:41.666071] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:12.101 [2024-10-09 01:47:41.666124] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.101 [2024-10-09 01:47:41.666139] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:12.101 #19 NEW cov: 12361 ft: 14524 corp: 8/613b lim: 105 exec/s: 0 rss: 73Mb L: 101/103 MS: 1 CopyPart- 00:10:12.101 [2024-10-09 01:47:41.705943] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13816973009035968447 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.101 [2024-10-09 01:47:41.705969] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:12.101 [2024-10-09 01:47:41.706015] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.101 [2024-10-09 01:47:41.706031] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:12.102 [2024-10-09 01:47:41.706081] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.102 [2024-10-09 01:47:41.706095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:12.102 #20 NEW cov: 12361 ft: 14554 corp: 9/692b lim: 105 exec/s: 0 rss: 74Mb L: 79/103 MS: 1 ChangeBinInt- 00:10:12.102 [2024-10-09 01:47:41.766265] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.102 [2024-10-09 01:47:41.766297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:12.102 [2024-10-09 01:47:41.766360] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.102 [2024-10-09 01:47:41.766378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:12.102 [2024-10-09 01:47:41.766429] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.102 [2024-10-09 01:47:41.766443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:12.102 [2024-10-09 01:47:41.766497] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:10:12.102 [2024-10-09 01:47:41.766512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:12.360 #24 NEW cov: 12361 ft: 14585 corp: 10/781b lim: 105 exec/s: 0 rss: 74Mb L: 89/103 MS: 4 ChangeBit-ShuffleBytes-ChangeByte-CrossOver- 00:10:12.360 [2024-10-09 01:47:41.806312] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13816973009035968447 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.360 [2024-10-09 01:47:41.806340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:12.360 [2024-10-09 01:47:41.806393] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.360 [2024-10-09 01:47:41.806410] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:12.360 [2024-10-09 01:47:41.806461] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:13816973012072644543 len:29888 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.360 [2024-10-09 01:47:41.806476] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:12.360 [2024-10-09 01:47:41.806526] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.360 [2024-10-09 01:47:41.806542] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:12.360 #25 NEW cov: 12361 ft: 14639 corp: 11/883b lim: 105 exec/s: 0 rss: 74Mb L: 102/103 MS: 1 InsertByte- 00:10:12.360 [2024-10-09 01:47:41.866347] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13816973009035968447 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.360 [2024-10-09 01:47:41.866373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:12.360 [2024-10-09 01:47:41.866420] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.360 [2024-10-09 01:47:41.866436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:12.360 [2024-10-09 01:47:41.866487] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.360 [2024-10-09 01:47:41.866503] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:12.360 #26 NEW cov: 12361 ft: 14656 corp: 12/962b lim: 105 exec/s: 0 rss: 74Mb L: 79/103 MS: 1 ChangeByte- 00:10:12.360 [2024-10-09 01:47:41.926691] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13816973009052745663 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.360 [2024-10-09 01:47:41.926717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:1 00:10:12.360 [2024-10-09 01:47:41.926788] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.360 [2024-10-09 01:47:41.926804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:12.360 [2024-10-09 01:47:41.926858] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.360 [2024-10-09 01:47:41.926874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:12.360 [2024-10-09 01:47:41.926925] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:13816973012072644394 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.360 [2024-10-09 01:47:41.926940] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:12.360 #27 NEW cov: 12361 ft: 14743 corp: 13/1062b lim: 105 exec/s: 0 rss: 74Mb L: 100/103 MS: 1 ChangeBit- 00:10:12.360 [2024-10-09 01:47:41.966750] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13816973009035968439 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.360 [2024-10-09 01:47:41.966778] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:12.360 [2024-10-09 01:47:41.966838] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.360 [2024-10-09 01:47:41.966855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:12.360 [2024-10-09 01:47:41.966918] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.360 [2024-10-09 01:47:41.966933] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:12.360 [2024-10-09 01:47:41.966984] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.360 [2024-10-09 01:47:41.966999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:12.360 NEW_FUNC[1/1]: 0x1bf30c8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:10:12.360 #28 NEW cov: 12384 ft: 14840 corp: 14/1161b lim: 105 exec/s: 0 rss: 74Mb L: 99/103 MS: 1 ChangeBit- 00:10:12.360 [2024-10-09 01:47:42.006891] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13816973009052745663 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.360 [2024-10-09 01:47:42.006918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:12.360 [2024-10-09 01:47:42.006966] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:10:12.360 [2024-10-09 01:47:42.006982] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:12.360 [2024-10-09 01:47:42.007032] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.360 [2024-10-09 01:47:42.007046] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:12.360 [2024-10-09 01:47:42.007097] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:13816973012072644394 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.360 [2024-10-09 01:47:42.007112] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:12.618 #29 NEW cov: 12384 ft: 14900 corp: 15/1262b lim: 105 exec/s: 0 rss: 74Mb L: 101/103 MS: 1 InsertByte- 00:10:12.618 [2024-10-09 01:47:42.067028] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13816972562359369663 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.618 [2024-10-09 01:47:42.067056] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:12.618 [2024-10-09 01:47:42.067103] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.618 [2024-10-09 01:47:42.067118] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:12.618 [2024-10-09 01:47:42.067169] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.618 [2024-10-09 01:47:42.067183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:12.618 [2024-10-09 01:47:42.067234] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.618 [2024-10-09 01:47:42.067248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:12.618 #30 NEW cov: 12384 ft: 14909 corp: 16/1361b lim: 105 exec/s: 30 rss: 74Mb L: 99/103 MS: 1 ChangeByte- 00:10:12.618 [2024-10-09 01:47:42.106789] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:235798528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.618 [2024-10-09 01:47:42.106821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:12.618 #31 NEW cov: 12384 ft: 14941 corp: 17/1390b lim: 105 exec/s: 31 rss: 74Mb L: 29/103 MS: 1 EraseBytes- 00:10:12.618 [2024-10-09 01:47:42.167188] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13816973009035968447 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.618 [2024-10-09 01:47:42.167214] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:12.618 
[2024-10-09 01:47:42.167259] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.618 [2024-10-09 01:47:42.167274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:12.618 [2024-10-09 01:47:42.167327] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.618 [2024-10-09 01:47:42.167343] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:12.618 #32 NEW cov: 12384 ft: 14984 corp: 18/1469b lim: 105 exec/s: 32 rss: 74Mb L: 79/103 MS: 1 ShuffleBytes- 00:10:12.618 [2024-10-09 01:47:42.227507] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13816972562359369663 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.618 [2024-10-09 01:47:42.227535] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:12.618 [2024-10-09 01:47:42.227582] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.618 [2024-10-09 01:47:42.227598] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:12.618 [2024-10-09 01:47:42.227649] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.618 [2024-10-09 01:47:42.227663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:12.618 [2024-10-09 01:47:42.227715] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.618 [2024-10-09 01:47:42.227730] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:12.618 #33 NEW cov: 12384 ft: 14986 corp: 19/1558b lim: 105 exec/s: 33 rss: 74Mb L: 89/103 MS: 1 EraseBytes- 00:10:12.877 [2024-10-09 01:47:42.287808] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13816973009035968447 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.877 [2024-10-09 01:47:42.287848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:12.877 [2024-10-09 01:47:42.287909] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.877 [2024-10-09 01:47:42.287928] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:12.877 [2024-10-09 01:47:42.287979] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.877 [2024-10-09 01:47:42.287997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:12.877 [2024-10-09 01:47:42.288049] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.877 [2024-10-09 01:47:42.288064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:12.877 [2024-10-09 01:47:42.288117] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.877 [2024-10-09 01:47:42.288131] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:10:12.877 #34 NEW cov: 12384 ft: 15056 corp: 20/1663b lim: 105 exec/s: 34 rss: 74Mb L: 105/105 MS: 1 InsertRepeatedBytes- 00:10:12.877 [2024-10-09 01:47:42.327397] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:235798528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.877 [2024-10-09 01:47:42.327425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:12.877 #35 NEW cov: 12384 ft: 15084 corp: 21/1696b lim: 105 exec/s: 35 rss: 74Mb L: 33/105 MS: 1 CMP- DE: "\000\002\000\000"- 00:10:12.877 [2024-10-09 01:47:42.387528] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13816973011556829887 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.877 [2024-10-09 01:47:42.387557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:12.877 #39 NEW cov: 12384 ft: 15105 corp: 22/1724b lim: 105 exec/s: 39 rss: 74Mb L: 28/105 MS: 4 ChangeByte-ChangeBinInt-InsertByte-CrossOver- 00:10:12.877 [2024-10-09 01:47:42.428099] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13816973009035968447 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.877 [2024-10-09 01:47:42.428130] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:12.877 [2024-10-09 01:47:42.428180] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.877 [2024-10-09 01:47:42.428197] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:12.877 [2024-10-09 01:47:42.428251] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.877 [2024-10-09 01:47:42.428268] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:12.877 [2024-10-09 01:47:42.428323] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.877 [2024-10-09 01:47:42.428339] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:12.877 #40 NEW cov: 12384 ft: 15128 corp: 23/1825b lim: 105 exec/s: 40 rss: 74Mb L: 101/105 MS: 1 ShuffleBytes- 
00:10:12.877 [2024-10-09 01:47:42.467771] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:68955275264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.877 [2024-10-09 01:47:42.467799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:12.877 #41 NEW cov: 12384 ft: 15152 corp: 24/1854b lim: 105 exec/s: 41 rss: 74Mb L: 29/105 MS: 1 ChangeBit- 00:10:12.877 [2024-10-09 01:47:42.508139] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13816973009035968447 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.877 [2024-10-09 01:47:42.508170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:12.877 [2024-10-09 01:47:42.508206] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.877 [2024-10-09 01:47:42.508221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:12.877 [2024-10-09 01:47:42.508271] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:13816973012066353087 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:12.877 [2024-10-09 01:47:42.508287] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:13.135 #47 NEW cov: 12384 ft: 15165 corp: 25/1933b lim: 105 exec/s: 47 rss: 74Mb L: 79/105 MS: 1 ChangeByte- 00:10:13.135 [2024-10-09 01:47:42.568506] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13816973009035968447 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.135 [2024-10-09 01:47:42.568537] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:13.135 [2024-10-09 01:47:42.568589] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13816973012072612287 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.135 [2024-10-09 01:47:42.568605] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:13.135 [2024-10-09 01:47:42.568656] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.135 [2024-10-09 01:47:42.568671] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:13.135 [2024-10-09 01:47:42.568724] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.136 [2024-10-09 01:47:42.568739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:13.136 [2024-10-09 01:47:42.568791] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.136 [2024-10-09 01:47:42.568807] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 
cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:10:13.136 #48 NEW cov: 12384 ft: 15205 corp: 26/2038b lim: 105 exec/s: 48 rss: 74Mb L: 105/105 MS: 1 InsertRepeatedBytes- 00:10:13.136 [2024-10-09 01:47:42.628317] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13816973009035968447 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.136 [2024-10-09 01:47:42.628345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:13.136 [2024-10-09 01:47:42.628383] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.136 [2024-10-09 01:47:42.628398] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:13.136 #49 NEW cov: 12384 ft: 15492 corp: 27/2095b lim: 105 exec/s: 49 rss: 74Mb L: 57/105 MS: 1 EraseBytes- 00:10:13.136 [2024-10-09 01:47:42.668672] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13816973009035968447 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.136 [2024-10-09 01:47:42.668699] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:13.136 [2024-10-09 01:47:42.668747] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.136 [2024-10-09 01:47:42.668766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:13.136 [2024-10-09 01:47:42.668822] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.136 [2024-10-09 01:47:42.668836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:13.136 [2024-10-09 01:47:42.668889] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.136 [2024-10-09 01:47:42.668904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:13.136 #50 NEW cov: 12384 ft: 15506 corp: 28/2194b lim: 105 exec/s: 50 rss: 74Mb L: 99/105 MS: 1 ChangeByte- 00:10:13.136 [2024-10-09 01:47:42.708784] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13816973009035968447 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.136 [2024-10-09 01:47:42.708817] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:13.136 [2024-10-09 01:47:42.708871] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.136 [2024-10-09 01:47:42.708887] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:13.136 [2024-10-09 01:47:42.708939] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:13816973012072644543 len:49088 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.136 [2024-10-09 01:47:42.708954] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:13.136 [2024-10-09 01:47:42.709006] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:18446509601776975871 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.136 [2024-10-09 01:47:42.709021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:13.136 #51 NEW cov: 12384 ft: 15558 corp: 29/2297b lim: 105 exec/s: 51 rss: 75Mb L: 103/105 MS: 1 ShuffleBytes- 00:10:13.136 [2024-10-09 01:47:42.768839] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13816973009035968447 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.136 [2024-10-09 01:47:42.768867] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:13.136 [2024-10-09 01:47:42.768914] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.136 [2024-10-09 01:47:42.768929] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:13.136 [2024-10-09 01:47:42.768997] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.136 [2024-10-09 01:47:42.769013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:13.136 #52 NEW cov: 12384 ft: 15610 corp: 30/2376b lim: 105 exec/s: 52 rss: 75Mb L: 79/105 MS: 1 ChangeBinInt- 00:10:13.394 [2024-10-09 01:47:42.809097] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13816973009035968447 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.394 [2024-10-09 01:47:42.809126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:13.394 [2024-10-09 01:47:42.809175] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.394 [2024-10-09 01:47:42.809194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:13.394 [2024-10-09 01:47:42.809246] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.394 [2024-10-09 01:47:42.809262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:13.394 [2024-10-09 01:47:42.809314] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.394 [2024-10-09 01:47:42.809329] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:13.394 #53 NEW cov: 12384 ft: 15616 corp: 31/2475b lim: 105 exec/s: 53 rss: 75Mb L: 99/105 MS: 1 CMP- 
DE: "\000\000\000\000\000\000\000\000"- 00:10:13.394 [2024-10-09 01:47:42.849343] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13816972562359369663 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.394 [2024-10-09 01:47:42.849370] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:13.394 [2024-10-09 01:47:42.849426] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.394 [2024-10-09 01:47:42.849442] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:13.395 [2024-10-09 01:47:42.849490] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:15046755950033616831 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.395 [2024-10-09 01:47:42.849505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:13.395 [2024-10-09 01:47:42.849556] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.395 [2024-10-09 01:47:42.849571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:13.395 [2024-10-09 01:47:42.849622] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.395 [2024-10-09 01:47:42.849638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:10:13.395 #54 NEW cov: 12384 ft: 15628 corp: 32/2580b lim: 105 exec/s: 54 rss: 75Mb L: 105/105 MS: 1 InsertRepeatedBytes- 00:10:13.395 [2024-10-09 01:47:42.889329] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13816973009035968447 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.395 [2024-10-09 01:47:42.889356] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:13.395 [2024-10-09 01:47:42.889404] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.395 [2024-10-09 01:47:42.889419] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:13.395 [2024-10-09 01:47:42.889470] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.395 [2024-10-09 01:47:42.889485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:13.395 [2024-10-09 01:47:42.889536] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.395 [2024-10-09 01:47:42.889553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:13.395 
#55 NEW cov: 12384 ft: 15633 corp: 33/2679b lim: 105 exec/s: 55 rss: 75Mb L: 99/105 MS: 1 ChangeBit- 00:10:13.395 [2024-10-09 01:47:42.929037] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:235798528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.395 [2024-10-09 01:47:42.929064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:13.395 #56 NEW cov: 12384 ft: 15661 corp: 34/2703b lim: 105 exec/s: 56 rss: 75Mb L: 24/105 MS: 1 EraseBytes- 00:10:13.395 [2024-10-09 01:47:42.969279] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13816973009035968323 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.395 [2024-10-09 01:47:42.969306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:13.395 [2024-10-09 01:47:42.969344] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.395 [2024-10-09 01:47:42.969359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:13.395 #57 NEW cov: 12384 ft: 15670 corp: 35/2760b lim: 105 exec/s: 57 rss: 75Mb L: 57/105 MS: 1 ChangeBinInt- 00:10:13.395 [2024-10-09 01:47:43.029450] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13816973009035968447 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.395 [2024-10-09 01:47:43.029477] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:13.395 [2024-10-09 01:47:43.029517] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.395 [2024-10-09 01:47:43.029532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:13.654 [2024-10-09 01:47:43.069584] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:0 lba:13816973009035968447 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.654 [2024-10-09 01:47:43.069611] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:13.654 [2024-10-09 01:47:43.069666] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:0 lba:13816973012072644543 len:49088 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.654 [2024-10-09 01:47:43.069682] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:13.654 #59 NEW cov: 12384 ft: 15672 corp: 36/2821b lim: 105 exec/s: 29 rss: 75Mb L: 61/105 MS: 2 EraseBytes-ShuffleBytes- 00:10:13.654 #59 DONE cov: 12384 ft: 15672 corp: 36/2821b lim: 105 exec/s: 29 rss: 75Mb 00:10:13.654 ###### Recommended dictionary. ###### 00:10:13.654 "\000\002\000\000" # Uses: 1 00:10:13.654 "\000\000\000\000\000\000\000\000" # Uses: 0 00:10:13.654 ###### End of recommended dictionary. 
###### 00:10:13.654 Done 59 runs in 2 second(s) 00:10:13.654 01:47:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_16.conf /var/tmp/suppress_nvmf_fuzz 00:10:13.654 01:47:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:13.654 01:47:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:13.654 01:47:43 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 17 1 0x1 00:10:13.654 01:47:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=17 00:10:13.654 01:47:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:10:13.654 01:47:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:10:13.654 01:47:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:10:13.654 01:47:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_17.conf 00:10:13.654 01:47:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:10:13.654 01:47:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:10:13.654 01:47:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 17 00:10:13.654 01:47:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4417 00:10:13.654 01:47:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:10:13.654 01:47:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' 00:10:13.654 01:47:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4417"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:10:13.654 01:47:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:10:13.654 01:47:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:10:13.654 01:47:43 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4417' -c /tmp/fuzz_json_17.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 -Z 17 00:10:13.654 [2024-10-09 01:47:43.252981] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 
00:10:13.654 [2024-10-09 01:47:43.253049] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4046171 ] 00:10:13.912 [2024-10-09 01:47:43.453020] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.912 [2024-10-09 01:47:43.491208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.912 [2024-10-09 01:47:43.550198] tcp.c: 754:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:13.912 [2024-10-09 01:47:43.566405] tcp.c:1098:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4417 *** 00:10:14.170 INFO: Running with entropic power schedule (0xFF, 100). 00:10:14.170 INFO: Seed: 2627438421 00:10:14.170 INFO: Loaded 1 modules (383850 inline 8-bit counters): 383850 [0x2be0a0c, 0x2c3e576), 00:10:14.170 INFO: Loaded 1 PC tables (383850 PCs): 383850 [0x2c3e578,0x3219c18), 00:10:14.171 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_17 00:10:14.171 INFO: A corpus is not provided, starting from an empty corpus 00:10:14.171 #2 INITED exec/s: 0 rss: 66Mb 00:10:14.171 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:10:14.171 This may also happen if the target rejected all inputs we tried so far 00:10:14.171 [2024-10-09 01:47:43.615184] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.171 [2024-10-09 01:47:43.615216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:14.429 NEW_FUNC[1/716]: 0x455a88 in fuzz_nvm_write_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:540 00:10:14.429 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:10:14.429 #9 NEW cov: 12178 ft: 12160 corp: 2/40b lim: 120 exec/s: 0 rss: 73Mb L: 39/39 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:10:14.429 [2024-10-09 01:47:43.946128] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.429 [2024-10-09 01:47:43.946172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:14.429 #10 NEW cov: 12291 ft: 12689 corp: 3/80b lim: 120 exec/s: 0 rss: 73Mb L: 40/40 MS: 1 InsertByte- 00:10:14.429 [2024-10-09 01:47:44.006185] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.429 [2024-10-09 01:47:44.006215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:14.429 #11 NEW cov: 12297 ft: 12989 corp: 4/104b lim: 120 exec/s: 0 rss: 73Mb L: 24/40 MS: 1 EraseBytes- 00:10:14.429 [2024-10-09 01:47:44.066634] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.429 [2024-10-09 01:47:44.066660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:1 00:10:14.429 [2024-10-09 01:47:44.066707] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.429 [2024-10-09 01:47:44.066722] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:14.429 [2024-10-09 01:47:44.066776] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.429 [2024-10-09 01:47:44.066792] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:14.687 #14 NEW cov: 12382 ft: 14128 corp: 5/196b lim: 120 exec/s: 0 rss: 73Mb L: 92/92 MS: 3 EraseBytes-ChangeBit-InsertRepeatedBytes- 00:10:14.688 [2024-10-09 01:47:44.126488] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.688 [2024-10-09 01:47:44.126516] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:14.688 #15 NEW cov: 12382 ft: 14311 corp: 6/235b lim: 120 exec/s: 0 rss: 73Mb L: 39/92 MS: 1 CrossOver- 00:10:14.688 [2024-10-09 01:47:44.166646] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:168427520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.688 [2024-10-09 01:47:44.166674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:14.688 #17 NEW cov: 12382 ft: 14452 corp: 7/276b lim: 120 exec/s: 0 rss: 73Mb L: 41/92 MS: 2 CopyPart-CrossOver- 00:10:14.688 [2024-10-09 01:47:44.206729] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.688 [2024-10-09 01:47:44.206758] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:14.688 #18 NEW cov: 12382 ft: 14534 corp: 8/316b lim: 120 exec/s: 0 rss: 73Mb L: 40/92 MS: 1 ChangeByte- 00:10:14.688 [2024-10-09 01:47:44.246849] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.688 [2024-10-09 01:47:44.246879] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:14.688 #19 NEW cov: 12382 ft: 14564 corp: 9/355b lim: 120 exec/s: 0 rss: 74Mb L: 39/92 MS: 1 ChangeBit- 00:10:14.688 [2024-10-09 01:47:44.307080] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.688 [2024-10-09 01:47:44.307107] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:14.688 #20 NEW cov: 12382 ft: 14590 corp: 10/395b lim: 120 exec/s: 0 rss: 74Mb L: 40/92 MS: 1 ChangeByte- 00:10:14.688 [2024-10-09 01:47:44.347156] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.688 [2024-10-09 01:47:44.347183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 
cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:14.946 #21 NEW cov: 12382 ft: 14619 corp: 11/434b lim: 120 exec/s: 0 rss: 74Mb L: 39/92 MS: 1 ChangeByte- 00:10:14.946 [2024-10-09 01:47:44.387267] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:578721382704613384 len:2057 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.946 [2024-10-09 01:47:44.387295] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:14.946 #22 NEW cov: 12382 ft: 14638 corp: 12/480b lim: 120 exec/s: 0 rss: 74Mb L: 46/92 MS: 1 InsertRepeatedBytes- 00:10:14.946 [2024-10-09 01:47:44.427364] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772207 len:171 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.946 [2024-10-09 01:47:44.427392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:14.946 #23 NEW cov: 12382 ft: 14675 corp: 13/504b lim: 120 exec/s: 0 rss: 74Mb L: 24/92 MS: 1 ChangeByte- 00:10:14.946 [2024-10-09 01:47:44.467471] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772160 len:14081 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.946 [2024-10-09 01:47:44.467499] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:14.946 #24 NEW cov: 12382 ft: 14759 corp: 14/543b lim: 120 exec/s: 0 rss: 74Mb L: 39/92 MS: 1 ChangeByte- 00:10:14.946 [2024-10-09 01:47:44.507635] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.946 [2024-10-09 01:47:44.507663] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:14.946 NEW_FUNC[1/1]: 0x1bf30c8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:10:14.946 #25 NEW cov: 12405 ft: 14813 corp: 15/576b lim: 120 exec/s: 0 rss: 74Mb L: 33/92 MS: 1 EraseBytes- 00:10:14.946 [2024-10-09 01:47:44.567832] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:578721382704613384 len:2057 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:14.946 [2024-10-09 01:47:44.567860] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:14.946 #26 NEW cov: 12405 ft: 14829 corp: 16/623b lim: 120 exec/s: 26 rss: 74Mb L: 47/92 MS: 1 InsertByte- 00:10:15.205 [2024-10-09 01:47:44.627974] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.205 [2024-10-09 01:47:44.628003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:15.205 #27 NEW cov: 12405 ft: 14887 corp: 17/663b lim: 120 exec/s: 27 rss: 74Mb L: 40/92 MS: 1 CopyPart- 00:10:15.205 [2024-10-09 01:47:44.688286] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772160 len:14081 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.205 [2024-10-09 01:47:44.688314] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:15.205 [2024-10-09 01:47:44.688359] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.205 [2024-10-09 01:47:44.688374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:15.205 #28 NEW cov: 12405 ft: 15204 corp: 18/722b lim: 120 exec/s: 28 rss: 74Mb L: 59/92 MS: 1 CopyPart- 00:10:15.205 [2024-10-09 01:47:44.748277] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:168427520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.205 [2024-10-09 01:47:44.748303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:15.205 #34 NEW cov: 12405 ft: 15239 corp: 19/763b lim: 120 exec/s: 34 rss: 74Mb L: 41/92 MS: 1 CrossOver- 00:10:15.205 [2024-10-09 01:47:44.808598] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772160 len:14081 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.205 [2024-10-09 01:47:44.808624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:15.205 [2024-10-09 01:47:44.808661] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.205 [2024-10-09 01:47:44.808677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:15.205 #35 NEW cov: 12405 ft: 15347 corp: 20/822b lim: 120 exec/s: 35 rss: 74Mb L: 59/92 MS: 1 CMP- DE: "\377&$&\274\211m\212"- 00:10:15.205 [2024-10-09 01:47:44.868611] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167671496704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.205 [2024-10-09 01:47:44.868640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:15.464 #36 NEW cov: 12405 ft: 15399 corp: 21/861b lim: 120 exec/s: 36 rss: 74Mb L: 39/92 MS: 1 ChangeBinInt- 00:10:15.464 [2024-10-09 01:47:44.908744] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:168427520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.464 [2024-10-09 01:47:44.908772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:15.464 #37 NEW cov: 12405 ft: 15412 corp: 22/899b lim: 120 exec/s: 37 rss: 74Mb L: 38/92 MS: 1 EraseBytes- 00:10:15.464 [2024-10-09 01:47:44.969199] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772160 len:14081 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.464 [2024-10-09 01:47:44.969227] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:15.464 [2024-10-09 01:47:44.969264] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.464 [2024-10-09 01:47:44.969279] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:15.464 [2024-10-09 01:47:44.969334] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:15481123719086080 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:10:15.464 [2024-10-09 01:47:44.969350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:15.464 #38 NEW cov: 12405 ft: 15421 corp: 23/976b lim: 120 exec/s: 38 rss: 74Mb L: 77/92 MS: 1 CrossOver- 00:10:15.464 [2024-10-09 01:47:45.029084] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.464 [2024-10-09 01:47:45.029113] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:15.464 #39 NEW cov: 12405 ft: 15445 corp: 24/1015b lim: 120 exec/s: 39 rss: 74Mb L: 39/92 MS: 1 ShuffleBytes- 00:10:15.464 [2024-10-09 01:47:45.089240] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167773231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.464 [2024-10-09 01:47:45.089269] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:15.464 #40 NEW cov: 12405 ft: 15474 corp: 25/1055b lim: 120 exec/s: 40 rss: 74Mb L: 40/92 MS: 1 ChangeBinInt- 00:10:15.464 [2024-10-09 01:47:45.129509] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:578721382704613384 len:2057 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.464 [2024-10-09 01:47:45.129538] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:15.464 [2024-10-09 01:47:45.129594] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:134744072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.464 [2024-10-09 01:47:45.129614] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:15.722 #46 NEW cov: 12405 ft: 15507 corp: 26/1112b lim: 120 exec/s: 46 rss: 74Mb L: 57/92 MS: 1 InsertRepeatedBytes- 00:10:15.722 [2024-10-09 01:47:45.169504] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.722 [2024-10-09 01:47:45.169531] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:15.722 #47 NEW cov: 12405 ft: 15518 corp: 27/1138b lim: 120 exec/s: 47 rss: 74Mb L: 26/92 MS: 1 EraseBytes- 00:10:15.722 [2024-10-09 01:47:45.209580] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:168427520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.722 [2024-10-09 01:47:45.209610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:15.722 #48 NEW cov: 12405 ft: 15554 corp: 28/1168b lim: 120 exec/s: 48 rss: 74Mb L: 30/92 MS: 1 EraseBytes- 00:10:15.722 [2024-10-09 01:47:45.250027] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:578721382704613384 len:2057 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.722 [2024-10-09 01:47:45.250055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:15.722 [2024-10-09 01:47:45.250098] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:0 lba:578721382704613384 len:2057 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.722 [2024-10-09 01:47:45.250115] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:15.722 [2024-10-09 01:47:45.250171] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:578721382704613384 len:2057 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.722 [2024-10-09 01:47:45.250187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:15.722 #49 NEW cov: 12405 ft: 15608 corp: 29/1258b lim: 120 exec/s: 49 rss: 75Mb L: 90/92 MS: 1 CopyPart- 00:10:15.722 [2024-10-09 01:47:45.309860] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:168427520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.722 [2024-10-09 01:47:45.309888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:15.722 #50 NEW cov: 12405 ft: 15638 corp: 30/1288b lim: 120 exec/s: 50 rss: 75Mb L: 30/92 MS: 1 ChangeBit- 00:10:15.722 [2024-10-09 01:47:45.370037] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.722 [2024-10-09 01:47:45.370065] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:15.981 #51 NEW cov: 12405 ft: 15661 corp: 31/1330b lim: 120 exec/s: 51 rss: 75Mb L: 42/92 MS: 1 CrossOver- 00:10:15.981 [2024-10-09 01:47:45.430545] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:578721382704613384 len:2057 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.981 [2024-10-09 01:47:45.430571] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:15.981 [2024-10-09 01:47:45.430608] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:578721382704613384 len:2057 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.981 [2024-10-09 01:47:45.430624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:15.981 [2024-10-09 01:47:45.430679] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:0 lba:578721382704613384 len:2057 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.981 [2024-10-09 01:47:45.430698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:15.981 #52 NEW cov: 12405 ft: 15723 corp: 32/1420b lim: 120 exec/s: 52 rss: 75Mb L: 90/92 MS: 1 ChangeBit- 00:10:15.981 [2024-10-09 01:47:45.490356] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.981 [2024-10-09 01:47:45.490384] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:15.981 #53 NEW cov: 12405 ft: 15830 corp: 33/1446b lim: 120 exec/s: 53 rss: 75Mb L: 26/92 MS: 1 EraseBytes- 00:10:15.981 [2024-10-09 01:47:45.530627] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:13229324073175552 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:10:15.981 [2024-10-09 01:47:45.530654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:15.981 [2024-10-09 01:47:45.530724] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.981 [2024-10-09 01:47:45.530741] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:15.981 #54 NEW cov: 12405 ft: 15843 corp: 34/1510b lim: 120 exec/s: 54 rss: 75Mb L: 64/92 MS: 1 CrossOver- 00:10:15.981 [2024-10-09 01:47:45.570574] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:0 lba:167772160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:15.981 [2024-10-09 01:47:45.570601] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:15.981 #55 NEW cov: 12405 ft: 15852 corp: 35/1549b lim: 120 exec/s: 27 rss: 75Mb L: 39/92 MS: 1 ChangeByte- 00:10:15.981 #55 DONE cov: 12405 ft: 15852 corp: 35/1549b lim: 120 exec/s: 27 rss: 75Mb 00:10:15.981 ###### Recommended dictionary. ###### 00:10:15.981 "\377&$&\274\211m\212" # Uses: 0 00:10:15.981 ###### End of recommended dictionary. ###### 00:10:15.981 Done 55 runs in 2 second(s) 00:10:16.239 01:47:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_17.conf /var/tmp/suppress_nvmf_fuzz 00:10:16.239 01:47:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:16.239 01:47:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:16.239 01:47:45 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 18 1 0x1 00:10:16.239 01:47:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=18 00:10:16.239 01:47:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:10:16.239 01:47:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:10:16.239 01:47:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:10:16.239 01:47:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_18.conf 00:10:16.239 01:47:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:10:16.239 01:47:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:10:16.239 01:47:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 18 00:10:16.239 01:47:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4418 00:10:16.239 01:47:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:10:16.239 01:47:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' 00:10:16.239 01:47:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4418"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:10:16.239 01:47:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:10:16.239 01:47:45 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:10:16.239 01:47:45 
llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4418' -c /tmp/fuzz_json_18.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 -Z 18 00:10:16.239 [2024-10-09 01:47:45.771988] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 00:10:16.239 [2024-10-09 01:47:45.772050] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4046528 ] 00:10:16.497 [2024-10-09 01:47:45.970917] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.498 [2024-10-09 01:47:46.009880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.498 [2024-10-09 01:47:46.068840] tcp.c: 754:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:16.498 [2024-10-09 01:47:46.085051] tcp.c:1098:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4418 *** 00:10:16.498 INFO: Running with entropic power schedule (0xFF, 100). 00:10:16.498 INFO: Seed: 852470757 00:10:16.498 INFO: Loaded 1 modules (383850 inline 8-bit counters): 383850 [0x2be0a0c, 0x2c3e576), 00:10:16.498 INFO: Loaded 1 PC tables (383850 PCs): 383850 [0x2c3e578,0x3219c18), 00:10:16.498 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_18 00:10:16.498 INFO: A corpus is not provided, starting from an empty corpus 00:10:16.498 #2 INITED exec/s: 0 rss: 66Mb 00:10:16.498 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:10:16.498 This may also happen if the target rejected all inputs we tried so far 00:10:16.498 [2024-10-09 01:47:46.139909] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:16.498 [2024-10-09 01:47:46.139943] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:16.498 [2024-10-09 01:47:46.139994] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:10:16.498 [2024-10-09 01:47:46.140011] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:17.014 NEW_FUNC[1/714]: 0x459378 in fuzz_nvm_write_zeroes_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:562 00:10:17.014 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:10:17.014 #4 NEW cov: 12121 ft: 12092 corp: 2/57b lim: 100 exec/s: 0 rss: 73Mb L: 56/56 MS: 2 CopyPart-InsertRepeatedBytes- 00:10:17.014 [2024-10-09 01:47:46.490853] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:17.014 [2024-10-09 01:47:46.490893] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:17.014 [2024-10-09 01:47:46.490945] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:10:17.014 [2024-10-09 01:47:46.490962] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:17.014 [2024-10-09 01:47:46.490991] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:10:17.014 [2024-10-09 01:47:46.491006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:17.014 #5 NEW cov: 12234 ft: 12995 corp: 3/134b lim: 100 exec/s: 0 rss: 73Mb L: 77/77 MS: 1 InsertRepeatedBytes- 00:10:17.014 [2024-10-09 01:47:46.581014] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:17.014 [2024-10-09 01:47:46.581045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:17.014 [2024-10-09 01:47:46.581094] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:10:17.014 [2024-10-09 01:47:46.581114] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:17.014 [2024-10-09 01:47:46.581145] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:10:17.014 [2024-10-09 01:47:46.581160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:17.014 [2024-10-09 01:47:46.581188] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:10:17.014 [2024-10-09 01:47:46.581202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:17.014 #6 NEW cov: 12240 ft: 13545 corp: 4/226b lim: 100 
exec/s: 0 rss: 73Mb L: 92/92 MS: 1 CopyPart- 00:10:17.014 [2024-10-09 01:47:46.671113] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:17.014 [2024-10-09 01:47:46.671143] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:17.014 [2024-10-09 01:47:46.671193] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:10:17.014 [2024-10-09 01:47:46.671211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:17.272 #7 NEW cov: 12325 ft: 13854 corp: 5/283b lim: 100 exec/s: 0 rss: 73Mb L: 57/92 MS: 1 InsertByte- 00:10:17.272 [2024-10-09 01:47:46.731358] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:17.272 [2024-10-09 01:47:46.731388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:17.272 [2024-10-09 01:47:46.731436] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:10:17.272 [2024-10-09 01:47:46.731452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:17.272 [2024-10-09 01:47:46.731481] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:10:17.272 [2024-10-09 01:47:46.731496] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:17.272 [2024-10-09 01:47:46.731524] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:10:17.272 [2024-10-09 01:47:46.731539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:17.272 #8 NEW cov: 12325 ft: 13934 corp: 6/369b lim: 100 exec/s: 0 rss: 73Mb L: 86/92 MS: 1 InsertRepeatedBytes- 00:10:17.272 [2024-10-09 01:47:46.791543] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:17.272 [2024-10-09 01:47:46.791573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:17.272 [2024-10-09 01:47:46.791622] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:10:17.272 [2024-10-09 01:47:46.791639] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:17.272 [2024-10-09 01:47:46.791670] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:10:17.272 [2024-10-09 01:47:46.791685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:17.272 [2024-10-09 01:47:46.791714] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:10:17.272 [2024-10-09 01:47:46.791729] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:17.272 #14 NEW cov: 12325 ft: 14002 corp: 7/462b lim: 100 exec/s: 0 rss: 73Mb L: 93/93 MS: 1 InsertByte- 00:10:17.272 [2024-10-09 
01:47:46.881646] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:17.272 [2024-10-09 01:47:46.881674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:17.272 [2024-10-09 01:47:46.881722] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:10:17.272 [2024-10-09 01:47:46.881739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:17.535 #15 NEW cov: 12325 ft: 14098 corp: 8/519b lim: 100 exec/s: 0 rss: 74Mb L: 57/93 MS: 1 ChangeByte- 00:10:17.535 [2024-10-09 01:47:46.971983] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:17.535 [2024-10-09 01:47:46.972012] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:17.535 [2024-10-09 01:47:46.972059] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:10:17.535 [2024-10-09 01:47:46.972075] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:17.535 [2024-10-09 01:47:46.972105] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:10:17.535 [2024-10-09 01:47:46.972120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:17.535 [2024-10-09 01:47:46.972148] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:10:17.535 [2024-10-09 01:47:46.972163] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:17.535 #16 NEW cov: 12325 ft: 14143 corp: 9/611b lim: 100 exec/s: 0 rss: 74Mb L: 92/93 MS: 1 ChangeByte- 00:10:17.535 [2024-10-09 01:47:47.032129] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:17.535 [2024-10-09 01:47:47.032157] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:17.535 [2024-10-09 01:47:47.032205] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:10:17.535 [2024-10-09 01:47:47.032221] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:17.535 [2024-10-09 01:47:47.032250] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:10:17.535 [2024-10-09 01:47:47.032265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:17.535 [2024-10-09 01:47:47.032292] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:10:17.535 [2024-10-09 01:47:47.032307] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:17.535 NEW_FUNC[1/1]: 0x1bf30c8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:10:17.535 #17 NEW cov: 12342 ft: 14183 corp: 10/704b lim: 100 exec/s: 0 rss: 74Mb L: 
93/93 MS: 1 ShuffleBytes- 00:10:17.535 [2024-10-09 01:47:47.122365] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:17.535 [2024-10-09 01:47:47.122392] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:17.535 [2024-10-09 01:47:47.122440] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:10:17.535 [2024-10-09 01:47:47.122456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:17.536 [2024-10-09 01:47:47.122486] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:10:17.536 [2024-10-09 01:47:47.122501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:17.536 [2024-10-09 01:47:47.122533] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:10:17.536 [2024-10-09 01:47:47.122548] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:17.536 #19 NEW cov: 12342 ft: 14292 corp: 11/797b lim: 100 exec/s: 19 rss: 74Mb L: 93/93 MS: 2 ChangeBit-CrossOver- 00:10:17.536 [2024-10-09 01:47:47.182515] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:17.536 [2024-10-09 01:47:47.182543] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:17.536 [2024-10-09 01:47:47.182591] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:10:17.536 [2024-10-09 01:47:47.182608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:17.536 [2024-10-09 01:47:47.182637] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:10:17.536 [2024-10-09 01:47:47.182651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:17.536 [2024-10-09 01:47:47.182679] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:10:17.536 [2024-10-09 01:47:47.182694] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:17.796 #20 NEW cov: 12342 ft: 14360 corp: 12/890b lim: 100 exec/s: 20 rss: 74Mb L: 93/93 MS: 1 ShuffleBytes- 00:10:17.796 [2024-10-09 01:47:47.243615] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:17.796 [2024-10-09 01:47:47.243664] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:17.796 [2024-10-09 01:47:47.243747] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:10:17.796 [2024-10-09 01:47:47.243773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:17.796 [2024-10-09 01:47:47.243856] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:10:17.796 
[2024-10-09 01:47:47.243882] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:17.796 #21 NEW cov: 12342 ft: 14530 corp: 13/967b lim: 100 exec/s: 21 rss: 74Mb L: 77/93 MS: 1 ChangeBit- 00:10:17.796 [2024-10-09 01:47:47.293556] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:17.796 [2024-10-09 01:47:47.293580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:17.796 [2024-10-09 01:47:47.293626] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:10:17.796 [2024-10-09 01:47:47.293640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:17.796 [2024-10-09 01:47:47.293693] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:10:17.796 [2024-10-09 01:47:47.293707] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:17.796 [2024-10-09 01:47:47.293758] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:10:17.796 [2024-10-09 01:47:47.293772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:17.796 #22 NEW cov: 12342 ft: 14570 corp: 14/1064b lim: 100 exec/s: 22 rss: 74Mb L: 97/97 MS: 1 InsertRepeatedBytes- 00:10:17.796 [2024-10-09 01:47:47.353740] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:17.796 [2024-10-09 01:47:47.353770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:17.796 [2024-10-09 01:47:47.353809] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:10:17.796 [2024-10-09 01:47:47.353830] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:17.796 [2024-10-09 01:47:47.353883] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:10:17.796 [2024-10-09 01:47:47.353898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:17.796 [2024-10-09 01:47:47.353952] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:10:17.796 [2024-10-09 01:47:47.353967] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:17.796 #23 NEW cov: 12342 ft: 14723 corp: 15/1157b lim: 100 exec/s: 23 rss: 74Mb L: 93/97 MS: 1 InsertByte- 00:10:17.796 [2024-10-09 01:47:47.393730] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:17.796 [2024-10-09 01:47:47.393754] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:17.796 [2024-10-09 01:47:47.393792] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:10:17.796 [2024-10-09 01:47:47.393806] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:17.796 [2024-10-09 01:47:47.393863] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:10:17.796 [2024-10-09 01:47:47.393878] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:17.796 #24 NEW cov: 12342 ft: 14802 corp: 16/1235b lim: 100 exec/s: 24 rss: 74Mb L: 78/97 MS: 1 InsertByte- 00:10:17.796 [2024-10-09 01:47:47.433713] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:17.796 [2024-10-09 01:47:47.433739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:17.796 [2024-10-09 01:47:47.433806] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:10:17.796 [2024-10-09 01:47:47.433832] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:17.796 #25 NEW cov: 12342 ft: 14838 corp: 17/1292b lim: 100 exec/s: 25 rss: 74Mb L: 57/97 MS: 1 InsertByte- 00:10:18.054 [2024-10-09 01:47:47.474096] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:18.054 [2024-10-09 01:47:47.474120] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:18.054 [2024-10-09 01:47:47.474168] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:10:18.054 [2024-10-09 01:47:47.474182] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:18.055 [2024-10-09 01:47:47.474236] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:10:18.055 [2024-10-09 01:47:47.474250] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:18.055 [2024-10-09 01:47:47.474302] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:10:18.055 [2024-10-09 01:47:47.474315] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:18.055 #26 NEW cov: 12342 ft: 14853 corp: 18/1385b lim: 100 exec/s: 26 rss: 74Mb L: 93/97 MS: 1 ShuffleBytes- 00:10:18.055 [2024-10-09 01:47:47.534234] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:18.055 [2024-10-09 01:47:47.534263] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:18.055 [2024-10-09 01:47:47.534319] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:10:18.055 [2024-10-09 01:47:47.534334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:18.055 [2024-10-09 01:47:47.534385] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:10:18.055 [2024-10-09 01:47:47.534399] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:18.055 [2024-10-09 01:47:47.534451] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:10:18.055 [2024-10-09 01:47:47.534466] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:18.055 #27 NEW cov: 12342 ft: 14872 corp: 19/1477b lim: 100 exec/s: 27 rss: 74Mb L: 92/97 MS: 1 ShuffleBytes- 00:10:18.055 [2024-10-09 01:47:47.574336] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:18.055 [2024-10-09 01:47:47.574362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:18.055 [2024-10-09 01:47:47.574411] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:10:18.055 [2024-10-09 01:47:47.574425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:18.055 [2024-10-09 01:47:47.574476] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:10:18.055 [2024-10-09 01:47:47.574490] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:18.055 [2024-10-09 01:47:47.574542] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:10:18.055 [2024-10-09 01:47:47.574557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:18.055 #28 NEW cov: 12342 ft: 14893 corp: 20/1557b lim: 100 exec/s: 28 rss: 74Mb L: 80/97 MS: 1 EraseBytes- 00:10:18.055 [2024-10-09 01:47:47.614229] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:18.055 [2024-10-09 01:47:47.614254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:18.055 [2024-10-09 01:47:47.614290] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:10:18.055 [2024-10-09 01:47:47.614304] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:18.055 #29 NEW cov: 12342 ft: 14916 corp: 21/1614b lim: 100 exec/s: 29 rss: 74Mb L: 57/97 MS: 1 CrossOver- 00:10:18.055 [2024-10-09 01:47:47.654598] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:18.055 [2024-10-09 01:47:47.654624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:18.055 [2024-10-09 01:47:47.654671] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:10:18.055 [2024-10-09 01:47:47.654686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:18.055 [2024-10-09 01:47:47.654753] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:10:18.055 [2024-10-09 01:47:47.654767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:18.055 
[2024-10-09 01:47:47.654826] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:10:18.055 [2024-10-09 01:47:47.654844] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:18.055 #30 NEW cov: 12342 ft: 14936 corp: 22/1695b lim: 100 exec/s: 30 rss: 74Mb L: 81/97 MS: 1 InsertByte- 00:10:18.055 [2024-10-09 01:47:47.714509] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:18.055 [2024-10-09 01:47:47.714536] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:18.055 [2024-10-09 01:47:47.714588] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:10:18.055 [2024-10-09 01:47:47.714603] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:18.313 #31 NEW cov: 12342 ft: 15006 corp: 23/1752b lim: 100 exec/s: 31 rss: 74Mb L: 57/97 MS: 1 ChangeByte- 00:10:18.313 [2024-10-09 01:47:47.754663] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:18.313 [2024-10-09 01:47:47.754688] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:18.313 [2024-10-09 01:47:47.754729] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:10:18.313 [2024-10-09 01:47:47.754743] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:18.313 #32 NEW cov: 12342 ft: 15065 corp: 24/1805b lim: 100 exec/s: 32 rss: 74Mb L: 53/97 MS: 1 EraseBytes- 00:10:18.313 [2024-10-09 01:47:47.794974] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:18.313 [2024-10-09 01:47:47.794999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:18.313 [2024-10-09 01:47:47.795061] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:10:18.313 [2024-10-09 01:47:47.795076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:18.313 [2024-10-09 01:47:47.795127] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:10:18.313 [2024-10-09 01:47:47.795140] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:18.313 [2024-10-09 01:47:47.795194] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:10:18.313 [2024-10-09 01:47:47.795210] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:18.313 #33 NEW cov: 12342 ft: 15074 corp: 25/1898b lim: 100 exec/s: 33 rss: 74Mb L: 93/97 MS: 1 CopyPart- 00:10:18.313 [2024-10-09 01:47:47.855177] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:18.313 [2024-10-09 01:47:47.855202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 
cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:18.313 [2024-10-09 01:47:47.855249] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:10:18.313 [2024-10-09 01:47:47.855264] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:18.313 [2024-10-09 01:47:47.855319] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:10:18.313 [2024-10-09 01:47:47.855334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:18.313 [2024-10-09 01:47:47.855387] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:10:18.313 [2024-10-09 01:47:47.855401] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:18.313 #34 NEW cov: 12342 ft: 15077 corp: 26/1979b lim: 100 exec/s: 34 rss: 74Mb L: 81/97 MS: 1 InsertByte- 00:10:18.313 [2024-10-09 01:47:47.895241] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:18.313 [2024-10-09 01:47:47.895267] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:18.313 [2024-10-09 01:47:47.895314] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:10:18.313 [2024-10-09 01:47:47.895328] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:18.313 [2024-10-09 01:47:47.895382] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:10:18.313 [2024-10-09 01:47:47.895396] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:18.313 [2024-10-09 01:47:47.895451] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:10:18.314 [2024-10-09 01:47:47.895463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:18.314 #35 NEW cov: 12342 ft: 15139 corp: 27/2072b lim: 100 exec/s: 35 rss: 74Mb L: 93/97 MS: 1 ShuffleBytes- 00:10:18.314 [2024-10-09 01:47:47.935351] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:18.314 [2024-10-09 01:47:47.935377] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:18.314 [2024-10-09 01:47:47.935426] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:10:18.314 [2024-10-09 01:47:47.935440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:18.314 [2024-10-09 01:47:47.935508] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:10:18.314 [2024-10-09 01:47:47.935523] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:18.314 [2024-10-09 01:47:47.935580] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:10:18.314 
[2024-10-09 01:47:47.935593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:18.314 #36 NEW cov: 12342 ft: 15153 corp: 28/2170b lim: 100 exec/s: 36 rss: 74Mb L: 98/98 MS: 1 CopyPart- 00:10:18.571 [2024-10-09 01:47:47.995519] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:18.572 [2024-10-09 01:47:47.995545] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:18.572 [2024-10-09 01:47:47.995607] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:10:18.572 [2024-10-09 01:47:47.995623] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:18.572 [2024-10-09 01:47:47.995676] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:10:18.572 [2024-10-09 01:47:47.995691] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:18.572 [2024-10-09 01:47:47.995745] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:10:18.572 [2024-10-09 01:47:47.995760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:18.572 [2024-10-09 01:47:48.035609] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:18.572 [2024-10-09 01:47:48.035634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:18.572 [2024-10-09 01:47:48.035687] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:10:18.572 [2024-10-09 01:47:48.035705] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:18.572 [2024-10-09 01:47:48.035758] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:10:18.572 [2024-10-09 01:47:48.035772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:18.572 [2024-10-09 01:47:48.035826] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:10:18.572 [2024-10-09 01:47:48.035840] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:18.572 #38 NEW cov: 12342 ft: 15172 corp: 29/2266b lim: 100 exec/s: 38 rss: 74Mb L: 96/98 MS: 2 InsertRepeatedBytes-CrossOver- 00:10:18.572 [2024-10-09 01:47:48.075629] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:18.572 [2024-10-09 01:47:48.075654] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:18.572 [2024-10-09 01:47:48.075690] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:10:18.572 [2024-10-09 01:47:48.075704] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 
00:10:18.572 [2024-10-09 01:47:48.075756] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:10:18.572 [2024-10-09 01:47:48.075770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:18.572 #39 NEW cov: 12342 ft: 15194 corp: 30/2343b lim: 100 exec/s: 39 rss: 75Mb L: 77/98 MS: 1 ChangeBit- 00:10:18.572 [2024-10-09 01:47:48.135938] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:0 nsid:0 00:10:18.572 [2024-10-09 01:47:48.135964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:18.572 [2024-10-09 01:47:48.136029] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:1 nsid:0 00:10:18.572 [2024-10-09 01:47:48.136044] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:18.572 [2024-10-09 01:47:48.136097] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:2 nsid:0 00:10:18.572 [2024-10-09 01:47:48.136110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:18.572 [2024-10-09 01:47:48.136163] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: WRITE ZEROES (08) sqid:1 cid:3 nsid:0 00:10:18.572 [2024-10-09 01:47:48.136178] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:18.572 #40 NEW cov: 12342 ft: 15199 corp: 31/2436b lim: 100 exec/s: 20 rss: 75Mb L: 93/98 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\017"- 00:10:18.572 #40 DONE cov: 12342 ft: 15199 corp: 31/2436b lim: 100 exec/s: 20 rss: 75Mb 00:10:18.572 ###### Recommended dictionary. ###### 00:10:18.572 "\377\377\377\377\377\377\377\017" # Uses: 0 00:10:18.572 ###### End of recommended dictionary. 
###### 00:10:18.572 Done 40 runs in 2 second(s) 00:10:18.830 01:47:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_18.conf /var/tmp/suppress_nvmf_fuzz 00:10:18.830 01:47:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:18.830 01:47:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:18.830 01:47:48 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 19 1 0x1 00:10:18.830 01:47:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=19 00:10:18.830 01:47:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:10:18.830 01:47:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:10:18.830 01:47:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:10:18.830 01:47:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_19.conf 00:10:18.830 01:47:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:10:18.830 01:47:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:10:18.830 01:47:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 19 00:10:18.830 01:47:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4419 00:10:18.830 01:47:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:10:18.830 01:47:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' 00:10:18.830 01:47:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4419"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:10:18.830 01:47:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:10:18.830 01:47:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:10:18.830 01:47:48 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4419' -c /tmp/fuzz_json_19.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 -Z 19 00:10:18.830 [2024-10-09 01:47:48.337176] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 
00:10:18.830 [2024-10-09 01:47:48.337242] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4046881 ] 00:10:19.088 [2024-10-09 01:47:48.535461] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.088 [2024-10-09 01:47:48.574646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.088 [2024-10-09 01:47:48.633769] tcp.c: 754:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:19.088 [2024-10-09 01:47:48.649982] tcp.c:1098:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4419 *** 00:10:19.088 INFO: Running with entropic power schedule (0xFF, 100). 00:10:19.088 INFO: Seed: 3418457602 00:10:19.088 INFO: Loaded 1 modules (383850 inline 8-bit counters): 383850 [0x2be0a0c, 0x2c3e576), 00:10:19.088 INFO: Loaded 1 PC tables (383850 PCs): 383850 [0x2c3e578,0x3219c18), 00:10:19.088 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_19 00:10:19.088 INFO: A corpus is not provided, starting from an empty corpus 00:10:19.088 #2 INITED exec/s: 0 rss: 66Mb 00:10:19.088 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:10:19.088 This may also happen if the target rejected all inputs we tried so far 00:10:19.088 [2024-10-09 01:47:48.694883] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18374403900871474942 len:65279 00:10:19.088 [2024-10-09 01:47:48.694918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:19.088 [2024-10-09 01:47:48.694968] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403900871474942 len:65279 00:10:19.088 [2024-10-09 01:47:48.694986] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:19.088 [2024-10-09 01:47:48.695015] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18374403900871474942 len:65279 00:10:19.088 [2024-10-09 01:47:48.695032] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:19.088 [2024-10-09 01:47:48.695060] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18374403900871474942 len:65279 00:10:19.088 [2024-10-09 01:47:48.695080] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:19.603 NEW_FUNC[1/714]: 0x45c338 in fuzz_nvm_write_uncorrectable_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:582 00:10:19.603 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:10:19.603 #3 NEW cov: 12096 ft: 12097 corp: 2/50b lim: 50 exec/s: 0 rss: 73Mb L: 49/49 MS: 1 InsertRepeatedBytes- 00:10:19.603 [2024-10-09 01:47:49.068213] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18374403445604941566 len:65279 00:10:19.603 [2024-10-09 01:47:49.068263] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:19.603 [2024-10-09 01:47:49.068335] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403900871474942 len:65279 00:10:19.603 [2024-10-09 01:47:49.068359] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:19.603 [2024-10-09 01:47:49.068452] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18374403900871474942 len:65279 00:10:19.603 [2024-10-09 01:47:49.068475] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:19.604 [2024-10-09 01:47:49.068572] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18374403900871474942 len:65279 00:10:19.604 [2024-10-09 01:47:49.068594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:19.604 #4 NEW cov: 12212 ft: 12859 corp: 3/99b lim: 50 exec/s: 0 rss: 73Mb L: 49/49 MS: 1 ChangeByte- 00:10:19.604 [2024-10-09 01:47:49.148146] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18374403445604941566 len:65279 00:10:19.604 [2024-10-09 01:47:49.148180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:19.604 [2024-10-09 01:47:49.148248] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403900871474942 len:65279 00:10:19.604 [2024-10-09 01:47:49.148266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:19.604 [2024-10-09 01:47:49.148324] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18374403900871474942 len:65279 00:10:19.604 [2024-10-09 01:47:49.148340] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:19.604 #5 NEW cov: 12218 ft: 13326 corp: 4/133b lim: 50 exec/s: 0 rss: 73Mb L: 34/49 MS: 1 EraseBytes- 00:10:19.604 [2024-10-09 01:47:49.218340] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14468034567615334600 len:51401 00:10:19.604 [2024-10-09 01:47:49.218374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:19.604 [2024-10-09 01:47:49.218452] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14468034567615334600 len:51401 00:10:19.604 [2024-10-09 01:47:49.218471] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:19.604 [2024-10-09 01:47:49.218532] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:14468034567615334600 len:51401 00:10:19.604 [2024-10-09 01:47:49.218549] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:19.604 #9 NEW cov: 12303 ft: 13599 corp: 5/165b 
lim: 50 exec/s: 0 rss: 73Mb L: 32/49 MS: 4 ShuffleBytes-ChangeByte-ShuffleBytes-InsertRepeatedBytes- 00:10:19.604 [2024-10-09 01:47:49.269022] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18374403900871474942 len:65279 00:10:19.604 [2024-10-09 01:47:49.269051] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:19.604 [2024-10-09 01:47:49.269137] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403900871474942 len:65279 00:10:19.604 [2024-10-09 01:47:49.269153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:19.604 [2024-10-09 01:47:49.269229] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18374403900871474942 len:65535 00:10:19.604 [2024-10-09 01:47:49.269245] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:19.604 [2024-10-09 01:47:49.269333] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18374403900871474942 len:65279 00:10:19.604 [2024-10-09 01:47:49.269353] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:19.604 [2024-10-09 01:47:49.269439] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:18374403900871474942 len:65035 00:10:19.604 [2024-10-09 01:47:49.269460] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:10:19.862 #10 NEW cov: 12303 ft: 13708 corp: 6/215b lim: 50 exec/s: 0 rss: 73Mb L: 50/50 MS: 1 InsertByte- 00:10:19.862 [2024-10-09 01:47:49.319065] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18374403445604941566 len:65279 00:10:19.862 [2024-10-09 01:47:49.319097] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:19.862 [2024-10-09 01:47:49.319173] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403900871474942 len:65279 00:10:19.862 [2024-10-09 01:47:49.319194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:19.862 [2024-10-09 01:47:49.319273] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18374403900871474942 len:65279 00:10:19.862 [2024-10-09 01:47:49.319289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:19.862 [2024-10-09 01:47:49.319379] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18374403900871474942 len:65279 00:10:19.862 [2024-10-09 01:47:49.319400] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:19.862 #11 NEW cov: 12303 ft: 13752 corp: 7/264b lim: 50 exec/s: 0 rss: 73Mb L: 49/50 MS: 1 ChangeBinInt- 00:10:19.862 [2024-10-09 01:47:49.368946] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14468034567615334600 len:51401 00:10:19.862 [2024-10-09 01:47:49.368975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:19.862 [2024-10-09 01:47:49.369045] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14468034567615334600 len:51401 00:10:19.862 [2024-10-09 01:47:49.369067] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:19.862 [2024-10-09 01:47:49.369134] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:14468034567615334600 len:51401 00:10:19.862 [2024-10-09 01:47:49.369153] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:19.862 #17 NEW cov: 12303 ft: 13785 corp: 8/296b lim: 50 exec/s: 0 rss: 73Mb L: 32/50 MS: 1 CopyPart- 00:10:19.862 [2024-10-09 01:47:49.439237] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18374403445604941566 len:65279 00:10:19.862 [2024-10-09 01:47:49.439265] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:19.862 [2024-10-09 01:47:49.439330] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403900871474942 len:65279 00:10:19.862 [2024-10-09 01:47:49.439345] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:19.862 [2024-10-09 01:47:49.439406] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18374403900871474942 len:65279 00:10:19.862 [2024-10-09 01:47:49.439425] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:19.862 #18 NEW cov: 12303 ft: 13817 corp: 9/330b lim: 50 exec/s: 0 rss: 74Mb L: 34/50 MS: 1 CrossOver- 00:10:19.862 [2024-10-09 01:47:49.509545] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:7854277753502746824 len:51401 00:10:19.862 [2024-10-09 01:47:49.509574] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:19.862 [2024-10-09 01:47:49.509643] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14468034567615334600 len:51401 00:10:19.862 [2024-10-09 01:47:49.509662] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:19.862 [2024-10-09 01:47:49.509709] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:14468034567615334600 len:51401 00:10:19.862 [2024-10-09 01:47:49.509726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:20.120 #19 NEW cov: 12303 ft: 13845 corp: 10/362b lim: 50 exec/s: 0 rss: 74Mb L: 32/50 MS: 1 CMP- DE: "m\000\000\000"- 00:10:20.120 [2024-10-09 01:47:49.579710] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE 
UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14468034567615334600 len:51401 00:10:20.120 [2024-10-09 01:47:49.579740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:20.120 [2024-10-09 01:47:49.579817] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14468034567615334600 len:51401 00:10:20.120 [2024-10-09 01:47:49.579836] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:20.120 [2024-10-09 01:47:49.579910] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:14468034567615334600 len:51401 00:10:20.120 [2024-10-09 01:47:49.579930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:20.120 NEW_FUNC[1/1]: 0x1bf30c8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:10:20.120 #20 NEW cov: 12326 ft: 13917 corp: 11/394b lim: 50 exec/s: 0 rss: 74Mb L: 32/50 MS: 1 ShuffleBytes- 00:10:20.120 [2024-10-09 01:47:49.629910] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14467908123778140360 len:51401 00:10:20.120 [2024-10-09 01:47:49.629941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:20.120 [2024-10-09 01:47:49.630002] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14468034567615334600 len:51401 00:10:20.120 [2024-10-09 01:47:49.630021] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:20.120 [2024-10-09 01:47:49.630062] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:14468034567615334600 len:51401 00:10:20.120 [2024-10-09 01:47:49.630078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:20.120 #21 NEW cov: 12326 ft: 13933 corp: 12/427b lim: 50 exec/s: 0 rss: 74Mb L: 33/50 MS: 1 InsertByte- 00:10:20.120 [2024-10-09 01:47:49.700353] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18374403900871474942 len:65279 00:10:20.120 [2024-10-09 01:47:49.700382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:20.120 [2024-10-09 01:47:49.700461] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403900871474942 len:65279 00:10:20.120 [2024-10-09 01:47:49.700478] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:20.120 [2024-10-09 01:47:49.700556] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18374403900871474942 len:65535 00:10:20.120 [2024-10-09 01:47:49.700576] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:20.120 [2024-10-09 01:47:49.700668] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 
lba:18374403900871474942 len:65279 00:10:20.120 [2024-10-09 01:47:49.700686] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:20.120 #22 NEW cov: 12326 ft: 13984 corp: 13/475b lim: 50 exec/s: 22 rss: 74Mb L: 48/50 MS: 1 CrossOver- 00:10:20.120 [2024-10-09 01:47:49.770657] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18374403900871474942 len:65279 00:10:20.121 [2024-10-09 01:47:49.770687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:20.121 [2024-10-09 01:47:49.770774] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403900871474942 len:65279 00:10:20.121 [2024-10-09 01:47:49.770793] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:20.121 [2024-10-09 01:47:49.770886] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18374403900871474942 len:65279 00:10:20.121 [2024-10-09 01:47:49.770906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:20.121 [2024-10-09 01:47:49.770998] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18374403900871470846 len:65279 00:10:20.121 [2024-10-09 01:47:49.771017] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:20.379 #23 NEW cov: 12326 ft: 13994 corp: 14/524b lim: 50 exec/s: 23 rss: 74Mb L: 49/50 MS: 1 ChangeBit- 00:10:20.379 [2024-10-09 01:47:49.820943] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18374403900871442942 len:65279 00:10:20.379 [2024-10-09 01:47:49.820971] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:20.379 [2024-10-09 01:47:49.821036] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403900871474942 len:65279 00:10:20.379 [2024-10-09 01:47:49.821054] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:20.379 [2024-10-09 01:47:49.821114] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18374403900871474942 len:65279 00:10:20.379 [2024-10-09 01:47:49.821129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:20.379 [2024-10-09 01:47:49.821221] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18374403900871474942 len:65279 00:10:20.379 [2024-10-09 01:47:49.821239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:20.379 #24 NEW cov: 12326 ft: 14014 corp: 15/573b lim: 50 exec/s: 24 rss: 74Mb L: 49/50 MS: 1 ChangeByte- 00:10:20.379 [2024-10-09 01:47:49.871220] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18374403445604941566 len:65279 00:10:20.379 [2024-10-09 
01:47:49.871248] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:20.379 [2024-10-09 01:47:49.871316] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403900871474942 len:65279 00:10:20.379 [2024-10-09 01:47:49.871334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:20.379 [2024-10-09 01:47:49.871411] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18374403900871474942 len:65279 00:10:20.379 [2024-10-09 01:47:49.871426] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:20.379 [2024-10-09 01:47:49.871508] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18374403900871474942 len:65279 00:10:20.379 [2024-10-09 01:47:49.871525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:20.379 #25 NEW cov: 12326 ft: 14041 corp: 16/622b lim: 50 exec/s: 25 rss: 74Mb L: 49/50 MS: 1 ChangeBinInt- 00:10:20.379 [2024-10-09 01:47:49.941141] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18374403900871474942 len:65279 00:10:20.379 [2024-10-09 01:47:49.941169] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:20.379 [2024-10-09 01:47:49.941258] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403900871474942 len:65279 00:10:20.379 [2024-10-09 01:47:49.941274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:20.379 [2024-10-09 01:47:49.941354] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18321205130273160958 len:65279 00:10:20.379 [2024-10-09 01:47:49.941368] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:20.379 [2024-10-09 01:47:49.941453] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18374403900871474942 len:65279 00:10:20.379 [2024-10-09 01:47:49.941472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:20.379 #26 NEW cov: 12326 ft: 14111 corp: 17/671b lim: 50 exec/s: 26 rss: 74Mb L: 49/50 MS: 1 ChangeByte- 00:10:20.379 [2024-10-09 01:47:49.991670] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18374403900871474942 len:65279 00:10:20.379 [2024-10-09 01:47:49.991698] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:20.379 [2024-10-09 01:47:49.991760] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403900871474942 len:65279 00:10:20.379 [2024-10-09 01:47:49.991777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:20.379 
[2024-10-09 01:47:49.991855] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18374403900871474942 len:65279 00:10:20.379 [2024-10-09 01:47:49.991873] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:20.379 [2024-10-09 01:47:49.991964] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18374403900871474942 len:65279 00:10:20.379 [2024-10-09 01:47:49.991984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:20.379 #27 NEW cov: 12326 ft: 14114 corp: 18/720b lim: 50 exec/s: 27 rss: 74Mb L: 49/50 MS: 1 ChangeByte- 00:10:20.379 [2024-10-09 01:47:50.042213] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18374403900871474942 len:65279 00:10:20.379 [2024-10-09 01:47:50.042247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:20.379 [2024-10-09 01:47:50.042299] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374344295315341054 len:51401 00:10:20.379 [2024-10-09 01:47:50.042317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:20.380 [2024-10-09 01:47:50.042402] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:14468094173171468488 len:65279 00:10:20.380 [2024-10-09 01:47:50.042421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:20.380 [2024-10-09 01:47:50.042504] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18374403900871474942 len:65481 00:10:20.380 [2024-10-09 01:47:50.042524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:20.637 #28 NEW cov: 12326 ft: 14162 corp: 19/765b lim: 50 exec/s: 28 rss: 74Mb L: 45/50 MS: 1 CrossOver- 00:10:20.638 [2024-10-09 01:47:50.112229] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18374403445470723838 len:65279 00:10:20.638 [2024-10-09 01:47:50.112270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:20.638 [2024-10-09 01:47:50.112372] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403900871474942 len:65279 00:10:20.638 [2024-10-09 01:47:50.112390] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:20.638 [2024-10-09 01:47:50.112481] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18374403900871474942 len:65279 00:10:20.638 [2024-10-09 01:47:50.112498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:20.638 #29 NEW cov: 12326 ft: 14287 corp: 20/799b lim: 50 exec/s: 29 rss: 74Mb L: 34/50 MS: 1 ChangeBinInt- 00:10:20.638 [2024-10-09 01:47:50.182643] nvme_qpair.c: 
247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18374403445470723838 len:65279 00:10:20.638 [2024-10-09 01:47:50.182680] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:20.638 [2024-10-09 01:47:50.182783] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403900871474942 len:65279 00:10:20.638 [2024-10-09 01:47:50.182799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:20.638 [2024-10-09 01:47:50.182886] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18374402998928342782 len:65279 00:10:20.638 [2024-10-09 01:47:50.182907] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:20.638 #30 NEW cov: 12326 ft: 14302 corp: 21/833b lim: 50 exec/s: 30 rss: 74Mb L: 34/50 MS: 1 ChangeByte- 00:10:20.638 [2024-10-09 01:47:50.253311] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18446744072783186120 len:65536 00:10:20.638 [2024-10-09 01:47:50.253344] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:20.638 [2024-10-09 01:47:50.253395] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 00:10:20.638 [2024-10-09 01:47:50.253415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:20.638 [2024-10-09 01:47:50.253487] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:14468034568538081480 len:51401 00:10:20.638 [2024-10-09 01:47:50.253505] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:20.638 [2024-10-09 01:47:50.253586] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:14468034567615334600 len:51401 00:10:20.638 [2024-10-09 01:47:50.253604] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:20.638 #31 NEW cov: 12326 ft: 14319 corp: 22/882b lim: 50 exec/s: 31 rss: 74Mb L: 49/50 MS: 1 InsertRepeatedBytes- 00:10:20.638 [2024-10-09 01:47:50.303686] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18374403900871474942 len:65279 00:10:20.638 [2024-10-09 01:47:50.303716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:20.638 [2024-10-09 01:47:50.303785] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403900871474942 len:65279 00:10:20.638 [2024-10-09 01:47:50.303804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:20.638 [2024-10-09 01:47:50.303895] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18374403900871474942 len:65283 00:10:20.638 [2024-10-09 01:47:50.303916] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:20.638 [2024-10-09 01:47:50.304011] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:72346757022941441 len:65279 00:10:20.638 [2024-10-09 01:47:50.304045] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:20.896 #32 NEW cov: 12326 ft: 14322 corp: 23/930b lim: 50 exec/s: 32 rss: 74Mb L: 48/50 MS: 1 ChangeBinInt- 00:10:20.896 [2024-10-09 01:47:50.354130] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18374403900871474942 len:65279 00:10:20.896 [2024-10-09 01:47:50.354160] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:20.896 [2024-10-09 01:47:50.354233] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403900871474942 len:65279 00:10:20.896 [2024-10-09 01:47:50.354252] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:20.896 [2024-10-09 01:47:50.354320] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18374403900871474942 len:65279 00:10:20.896 [2024-10-09 01:47:50.354338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:20.896 [2024-10-09 01:47:50.354419] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18374166406359871230 len:65279 00:10:20.896 [2024-10-09 01:47:50.354436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:20.896 [2024-10-09 01:47:50.354523] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:18374403900871474942 len:65035 00:10:20.896 [2024-10-09 01:47:50.354541] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:10:20.896 #33 NEW cov: 12326 ft: 14341 corp: 24/980b lim: 50 exec/s: 33 rss: 74Mb L: 50/50 MS: 1 InsertByte- 00:10:20.896 [2024-10-09 01:47:50.424045] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14468034567615334600 len:51401 00:10:20.896 [2024-10-09 01:47:50.424076] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:20.896 [2024-10-09 01:47:50.424168] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14468034567615334600 len:51401 00:10:20.896 [2024-10-09 01:47:50.424186] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:20.896 #34 NEW cov: 12326 ft: 14594 corp: 25/1003b lim: 50 exec/s: 34 rss: 74Mb L: 23/50 MS: 1 EraseBytes- 00:10:20.896 [2024-10-09 01:47:50.474548] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18374403445604941566 len:65279 00:10:20.896 [2024-10-09 01:47:50.474579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:20.896 [2024-10-09 01:47:50.474641] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403900871472894 len:65279 00:10:20.896 [2024-10-09 01:47:50.474660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:20.896 [2024-10-09 01:47:50.474743] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18374403900871474942 len:65279 00:10:20.896 [2024-10-09 01:47:50.474759] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:20.896 #35 NEW cov: 12326 ft: 14603 corp: 26/1037b lim: 50 exec/s: 35 rss: 74Mb L: 34/50 MS: 1 ChangeBit- 00:10:20.896 [2024-10-09 01:47:50.525433] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18374403900871474942 len:65279 00:10:20.896 [2024-10-09 01:47:50.525463] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:20.896 [2024-10-09 01:47:50.525531] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403900871474942 len:65279 00:10:20.896 [2024-10-09 01:47:50.525550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:20.896 [2024-10-09 01:47:50.525618] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18374196093173825278 len:65279 00:10:20.896 [2024-10-09 01:47:50.525636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:20.896 [2024-10-09 01:47:50.525722] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:3 nsid:0 lba:18374403900871474942 len:65279 00:10:20.896 [2024-10-09 01:47:50.525739] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:20.896 [2024-10-09 01:47:50.525834] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:4 nsid:0 lba:18374403900871474942 len:65035 00:10:20.896 [2024-10-09 01:47:50.525855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:10:21.154 #36 NEW cov: 12326 ft: 14672 corp: 27/1087b lim: 50 exec/s: 36 rss: 74Mb L: 50/50 MS: 1 CrossOver- 00:10:21.154 [2024-10-09 01:47:50.595410] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:14467908123778140360 len:51401 00:10:21.154 [2024-10-09 01:47:50.595444] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:21.154 [2024-10-09 01:47:50.595522] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:14468034567615334600 len:51401 00:10:21.154 [2024-10-09 01:47:50.595539] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:21.154 [2024-10-09 01:47:50.595639] nvme_qpair.c: 247:nvme_io_qpair_print_command: 
*NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:14468034567615334600 len:51401 00:10:21.154 [2024-10-09 01:47:50.595660] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:21.154 #37 NEW cov: 12326 ft: 14685 corp: 28/1120b lim: 50 exec/s: 37 rss: 74Mb L: 33/50 MS: 1 ShuffleBytes- 00:10:21.154 [2024-10-09 01:47:50.665953] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:0 nsid:0 lba:18374403445604941566 len:65279 00:10:21.154 [2024-10-09 01:47:50.665984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:21.155 [2024-10-09 01:47:50.666053] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:1 nsid:0 lba:18374403900871472894 len:65279 00:10:21.155 [2024-10-09 01:47:50.666070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:21.155 [2024-10-09 01:47:50.666113] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: WRITE UNCORRECTABLE sqid:1 cid:2 nsid:0 lba:18374403900871474942 len:65279 00:10:21.155 [2024-10-09 01:47:50.666129] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:21.155 #38 NEW cov: 12326 ft: 14732 corp: 29/1154b lim: 50 exec/s: 19 rss: 75Mb L: 34/50 MS: 1 ShuffleBytes- 00:10:21.155 #38 DONE cov: 12326 ft: 14732 corp: 29/1154b lim: 50 exec/s: 19 rss: 75Mb 00:10:21.155 ###### Recommended dictionary. ###### 00:10:21.155 "m\000\000\000" # Uses: 0 00:10:21.155 ###### End of recommended dictionary. ###### 00:10:21.155 Done 38 runs in 2 second(s) 00:10:21.155 01:47:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_19.conf /var/tmp/suppress_nvmf_fuzz 00:10:21.155 01:47:50 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:21.155 01:47:50 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:21.155 01:47:50 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 20 1 0x1 00:10:21.155 01:47:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=20 00:10:21.155 01:47:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:10:21.155 01:47:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:10:21.155 01:47:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:10:21.155 01:47:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_20.conf 00:10:21.155 01:47:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:10:21.155 01:47:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:10:21.155 01:47:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 20 00:10:21.155 01:47:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4420 00:10:21.155 01:47:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:10:21.413 01:47:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' 00:10:21.413 01:47:50 
llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4420"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:10:21.413 01:47:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:10:21.413 01:47:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:10:21.413 01:47:50 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4420' -c /tmp/fuzz_json_20.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 -Z 20 00:10:21.413 [2024-10-09 01:47:50.858921] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 00:10:21.413 [2024-10-09 01:47:50.858988] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4047243 ] 00:10:21.413 [2024-10-09 01:47:51.057106] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.671 [2024-10-09 01:47:51.096488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.671 [2024-10-09 01:47:51.155622] tcp.c: 754:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:21.671 [2024-10-09 01:47:51.171823] tcp.c:1098:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:10:21.671 INFO: Running with entropic power schedule (0xFF, 100). 00:10:21.671 INFO: Seed: 1642494818 00:10:21.671 INFO: Loaded 1 modules (383850 inline 8-bit counters): 383850 [0x2be0a0c, 0x2c3e576), 00:10:21.671 INFO: Loaded 1 PC tables (383850 PCs): 383850 [0x2c3e578,0x3219c18), 00:10:21.671 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_20 00:10:21.671 INFO: A corpus is not provided, starting from an empty corpus 00:10:21.671 #2 INITED exec/s: 0 rss: 66Mb 00:10:21.671 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:10:21.671 This may also happen if the target rejected all inputs we tried so far 00:10:21.671 [2024-10-09 01:47:51.219458] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:21.671 [2024-10-09 01:47:51.219488] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:21.671 [2024-10-09 01:47:51.219541] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:10:21.671 [2024-10-09 01:47:51.219556] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:21.929 NEW_FUNC[1/716]: 0x45def8 in fuzz_nvm_reservation_acquire_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:597 00:10:21.929 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:10:21.929 #3 NEW cov: 12157 ft: 12154 corp: 2/54b lim: 90 exec/s: 0 rss: 74Mb L: 53/53 MS: 1 InsertRepeatedBytes- 00:10:21.929 [2024-10-09 01:47:51.540603] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:21.929 [2024-10-09 01:47:51.540651] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:21.929 [2024-10-09 01:47:51.540734] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:10:21.929 [2024-10-09 01:47:51.540757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:21.929 #4 NEW cov: 12270 ft: 12802 corp: 3/107b lim: 90 exec/s: 0 rss: 74Mb L: 53/53 MS: 1 ChangeBit- 00:10:22.188 [2024-10-09 01:47:51.600554] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:22.188 [2024-10-09 01:47:51.600585] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:22.188 [2024-10-09 01:47:51.600625] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:10:22.188 [2024-10-09 01:47:51.600640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:22.188 [2024-10-09 01:47:51.600693] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:10:22.188 [2024-10-09 01:47:51.600708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:22.188 #5 NEW cov: 12276 ft: 13343 corp: 4/163b lim: 90 exec/s: 0 rss: 74Mb L: 56/56 MS: 1 CopyPart- 00:10:22.188 [2024-10-09 01:47:51.640589] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:22.188 [2024-10-09 01:47:51.640618] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:22.188 [2024-10-09 01:47:51.640657] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:10:22.188 [2024-10-09 01:47:51.640673] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:22.188 [2024-10-09 01:47:51.640727] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:10:22.188 [2024-10-09 01:47:51.640742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:22.188 #6 NEW cov: 12361 ft: 13545 corp: 5/219b lim: 90 exec/s: 0 rss: 74Mb L: 56/56 MS: 1 ChangeByte- 00:10:22.188 [2024-10-09 01:47:51.700799] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:22.188 [2024-10-09 01:47:51.700833] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:22.188 [2024-10-09 01:47:51.700874] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:10:22.188 [2024-10-09 01:47:51.700890] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:22.188 [2024-10-09 01:47:51.700949] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:10:22.188 [2024-10-09 01:47:51.700966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:22.188 #7 NEW cov: 12361 ft: 13760 corp: 6/275b lim: 90 exec/s: 0 rss: 74Mb L: 56/56 MS: 1 ChangeByte- 00:10:22.188 [2024-10-09 01:47:51.740911] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:22.188 [2024-10-09 01:47:51.740939] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:22.188 [2024-10-09 01:47:51.740977] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:10:22.188 [2024-10-09 01:47:51.740992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:22.188 [2024-10-09 01:47:51.741048] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:10:22.188 [2024-10-09 01:47:51.741063] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:22.188 #8 NEW cov: 12361 ft: 13888 corp: 7/331b lim: 90 exec/s: 0 rss: 74Mb L: 56/56 MS: 1 CMP- DE: "\377\377\377\377\000\000\000\000"- 00:10:22.188 [2024-10-09 01:47:51.800783] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:22.188 [2024-10-09 01:47:51.800811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:22.188 #9 NEW cov: 12361 ft: 14770 corp: 8/359b lim: 90 exec/s: 0 rss: 74Mb L: 28/56 MS: 1 CrossOver- 00:10:22.446 [2024-10-09 01:47:51.861269] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:22.446 [2024-10-09 01:47:51.861298] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:22.446 [2024-10-09 01:47:51.861338] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:10:22.446 [2024-10-09 01:47:51.861354] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:22.446 [2024-10-09 01:47:51.861411] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:10:22.446 [2024-10-09 01:47:51.861427] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:22.446 #10 NEW cov: 12361 ft: 14844 corp: 9/420b lim: 90 exec/s: 0 rss: 74Mb L: 61/61 MS: 1 InsertRepeatedBytes- 00:10:22.446 [2024-10-09 01:47:51.921260] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:22.446 [2024-10-09 01:47:51.921288] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:22.446 [2024-10-09 01:47:51.921332] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:10:22.446 [2024-10-09 01:47:51.921348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:22.446 #11 NEW cov: 12361 ft: 14917 corp: 10/473b lim: 90 exec/s: 0 rss: 74Mb L: 53/61 MS: 1 ChangeBit- 00:10:22.446 [2024-10-09 01:47:51.961523] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:22.446 [2024-10-09 01:47:51.961550] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:22.446 [2024-10-09 01:47:51.961591] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:10:22.446 [2024-10-09 01:47:51.961606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:22.446 [2024-10-09 01:47:51.961660] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:10:22.446 [2024-10-09 01:47:51.961676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:22.446 #12 NEW cov: 12361 ft: 14949 corp: 11/529b lim: 90 exec/s: 0 rss: 74Mb L: 56/61 MS: 1 ShuffleBytes- 00:10:22.446 [2024-10-09 01:47:52.001647] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:22.446 [2024-10-09 01:47:52.001675] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:22.446 [2024-10-09 01:47:52.001713] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:10:22.446 [2024-10-09 01:47:52.001728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:22.446 [2024-10-09 01:47:52.001783] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:10:22.446 [2024-10-09 01:47:52.001799] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:22.446 #18 NEW cov: 12361 ft: 15007 corp: 12/591b lim: 90 exec/s: 0 rss: 74Mb L: 62/62 MS: 1 
CrossOver- 00:10:22.446 [2024-10-09 01:47:52.061832] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:22.446 [2024-10-09 01:47:52.061859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:22.446 [2024-10-09 01:47:52.061898] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:10:22.446 [2024-10-09 01:47:52.061914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:22.446 [2024-10-09 01:47:52.061969] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:10:22.447 [2024-10-09 01:47:52.062001] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:22.447 #19 NEW cov: 12361 ft: 15046 corp: 13/654b lim: 90 exec/s: 0 rss: 74Mb L: 63/63 MS: 1 InsertRepeatedBytes- 00:10:22.447 [2024-10-09 01:47:52.101791] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:22.447 [2024-10-09 01:47:52.101822] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:22.447 [2024-10-09 01:47:52.101864] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:10:22.447 [2024-10-09 01:47:52.101880] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:22.704 NEW_FUNC[1/1]: 0x1bf30c8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:10:22.704 #20 NEW cov: 12384 ft: 15084 corp: 14/707b lim: 90 exec/s: 0 rss: 75Mb L: 53/63 MS: 1 ChangeByte- 00:10:22.704 [2024-10-09 01:47:52.162125] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:22.704 [2024-10-09 01:47:52.162152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:22.704 [2024-10-09 01:47:52.162193] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:10:22.704 [2024-10-09 01:47:52.162208] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:22.704 [2024-10-09 01:47:52.162266] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:10:22.704 [2024-10-09 01:47:52.162282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:22.704 #21 NEW cov: 12384 ft: 15122 corp: 15/763b lim: 90 exec/s: 0 rss: 75Mb L: 56/63 MS: 1 ChangeByte- 00:10:22.704 [2024-10-09 01:47:52.202193] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:22.704 [2024-10-09 01:47:52.202220] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:22.704 [2024-10-09 01:47:52.202261] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:10:22.704 [2024-10-09 
01:47:52.202276] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:22.704 [2024-10-09 01:47:52.202331] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:10:22.704 [2024-10-09 01:47:52.202346] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:22.704 #22 NEW cov: 12384 ft: 15163 corp: 16/819b lim: 90 exec/s: 22 rss: 75Mb L: 56/63 MS: 1 CopyPart- 00:10:22.704 [2024-10-09 01:47:52.242157] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:22.704 [2024-10-09 01:47:52.242183] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:22.704 [2024-10-09 01:47:52.242223] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:10:22.704 [2024-10-09 01:47:52.242239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:22.704 #23 NEW cov: 12384 ft: 15185 corp: 17/872b lim: 90 exec/s: 23 rss: 75Mb L: 53/63 MS: 1 ShuffleBytes- 00:10:22.704 [2024-10-09 01:47:52.302498] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:22.704 [2024-10-09 01:47:52.302524] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:22.704 [2024-10-09 01:47:52.302573] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:10:22.704 [2024-10-09 01:47:52.302588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:22.704 [2024-10-09 01:47:52.302644] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:10:22.704 [2024-10-09 01:47:52.302659] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:22.704 #24 NEW cov: 12384 ft: 15220 corp: 18/934b lim: 90 exec/s: 24 rss: 75Mb L: 62/63 MS: 1 PersAutoDict- DE: "\377\377\377\377\000\000\000\000"- 00:10:22.704 [2024-10-09 01:47:52.362651] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:22.704 [2024-10-09 01:47:52.362679] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:22.704 [2024-10-09 01:47:52.362718] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:10:22.704 [2024-10-09 01:47:52.362734] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:22.704 [2024-10-09 01:47:52.362788] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:10:22.704 [2024-10-09 01:47:52.362825] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:22.962 #25 NEW cov: 12384 ft: 15249 corp: 19/995b lim: 90 exec/s: 25 rss: 75Mb L: 61/63 MS: 1 ShuffleBytes- 00:10:22.962 
[2024-10-09 01:47:52.402752] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:22.962 [2024-10-09 01:47:52.402780] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:22.962 [2024-10-09 01:47:52.402825] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:10:22.962 [2024-10-09 01:47:52.402841] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:22.962 [2024-10-09 01:47:52.402898] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:10:22.962 [2024-10-09 01:47:52.402914] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:22.962 #26 NEW cov: 12384 ft: 15259 corp: 20/1064b lim: 90 exec/s: 26 rss: 75Mb L: 69/69 MS: 1 PersAutoDict- DE: "\377\377\377\377\000\000\000\000"- 00:10:22.962 [2024-10-09 01:47:52.462818] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:22.962 [2024-10-09 01:47:52.462845] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:22.962 [2024-10-09 01:47:52.462895] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:10:22.962 [2024-10-09 01:47:52.462910] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:22.962 #27 NEW cov: 12384 ft: 15271 corp: 21/1100b lim: 90 exec/s: 27 rss: 75Mb L: 36/69 MS: 1 PersAutoDict- DE: "\377\377\377\377\000\000\000\000"- 00:10:22.962 [2024-10-09 01:47:52.523107] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:22.962 [2024-10-09 01:47:52.523137] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:22.962 [2024-10-09 01:47:52.523196] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:10:22.962 [2024-10-09 01:47:52.523212] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:22.962 [2024-10-09 01:47:52.523268] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:10:22.962 [2024-10-09 01:47:52.523282] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:22.962 #28 NEW cov: 12384 ft: 15305 corp: 22/1169b lim: 90 exec/s: 28 rss: 75Mb L: 69/69 MS: 1 ChangeByte- 00:10:22.962 [2024-10-09 01:47:52.583262] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:22.962 [2024-10-09 01:47:52.583289] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:22.962 [2024-10-09 01:47:52.583339] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:10:22.962 [2024-10-09 01:47:52.583355] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:22.962 [2024-10-09 01:47:52.583413] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:10:22.962 [2024-10-09 01:47:52.583428] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:22.962 #29 NEW cov: 12384 ft: 15330 corp: 23/1223b lim: 90 exec/s: 29 rss: 75Mb L: 54/69 MS: 1 InsertByte- 00:10:22.962 [2024-10-09 01:47:52.623382] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:22.962 [2024-10-09 01:47:52.623408] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:22.962 [2024-10-09 01:47:52.623455] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:10:22.962 [2024-10-09 01:47:52.623470] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:22.962 [2024-10-09 01:47:52.623537] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:10:22.962 [2024-10-09 01:47:52.623553] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:23.219 #30 NEW cov: 12384 ft: 15354 corp: 24/1279b lim: 90 exec/s: 30 rss: 75Mb L: 56/69 MS: 1 ShuffleBytes- 00:10:23.219 [2024-10-09 01:47:52.683557] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:23.219 [2024-10-09 01:47:52.683584] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:23.219 [2024-10-09 01:47:52.683627] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:10:23.219 [2024-10-09 01:47:52.683642] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:23.219 [2024-10-09 01:47:52.683695] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:10:23.219 [2024-10-09 01:47:52.683710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:23.219 #31 NEW cov: 12384 ft: 15397 corp: 25/1336b lim: 90 exec/s: 31 rss: 75Mb L: 57/69 MS: 1 InsertByte- 00:10:23.219 [2024-10-09 01:47:52.743706] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:23.219 [2024-10-09 01:47:52.743733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:23.219 [2024-10-09 01:47:52.743770] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:10:23.219 [2024-10-09 01:47:52.743789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:23.219 [2024-10-09 01:47:52.743848] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:10:23.219 [2024-10-09 01:47:52.743864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:23.219 #32 NEW cov: 12384 ft: 15428 corp: 26/1398b lim: 90 exec/s: 32 rss: 75Mb L: 62/69 MS: 1 ChangeBinInt- 00:10:23.219 [2024-10-09 01:47:52.803893] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:23.219 [2024-10-09 01:47:52.803920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:23.219 [2024-10-09 01:47:52.803958] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:10:23.219 [2024-10-09 01:47:52.803974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:23.219 [2024-10-09 01:47:52.804028] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:10:23.219 [2024-10-09 01:47:52.804043] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:23.219 #33 NEW cov: 12384 ft: 15480 corp: 27/1452b lim: 90 exec/s: 33 rss: 75Mb L: 54/69 MS: 1 ChangeBinInt- 00:10:23.219 [2024-10-09 01:47:52.863889] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:23.219 [2024-10-09 01:47:52.863918] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:23.219 [2024-10-09 01:47:52.863977] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:10:23.219 [2024-10-09 01:47:52.863993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:23.219 #34 NEW cov: 12384 ft: 15482 corp: 28/1505b lim: 90 exec/s: 34 rss: 75Mb L: 53/69 MS: 1 CrossOver- 00:10:23.477 [2024-10-09 01:47:52.904274] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:23.477 [2024-10-09 01:47:52.904301] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:23.477 [2024-10-09 01:47:52.904351] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:10:23.477 [2024-10-09 01:47:52.904366] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:23.477 [2024-10-09 01:47:52.904435] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:10:23.477 [2024-10-09 01:47:52.904452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:23.477 [2024-10-09 01:47:52.904509] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:10:23.477 [2024-10-09 01:47:52.904525] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:23.477 #35 NEW cov: 12384 ft: 15835 corp: 29/1580b lim: 90 exec/s: 35 rss: 75Mb L: 75/75 MS: 1 CopyPart- 00:10:23.477 [2024-10-09 01:47:52.944074] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 
nsid:0 00:10:23.477 [2024-10-09 01:47:52.944100] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:23.477 [2024-10-09 01:47:52.944140] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:10:23.477 [2024-10-09 01:47:52.944155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:23.477 #36 NEW cov: 12384 ft: 15844 corp: 30/1633b lim: 90 exec/s: 36 rss: 75Mb L: 53/75 MS: 1 CMP- DE: "\377\377\377\007"- 00:10:23.477 [2024-10-09 01:47:52.984343] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:23.477 [2024-10-09 01:47:52.984369] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:23.477 [2024-10-09 01:47:52.984414] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:10:23.477 [2024-10-09 01:47:52.984429] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:23.477 [2024-10-09 01:47:52.984483] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:10:23.477 [2024-10-09 01:47:52.984498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:23.477 #37 NEW cov: 12384 ft: 15908 corp: 31/1703b lim: 90 exec/s: 37 rss: 75Mb L: 70/75 MS: 1 PersAutoDict- DE: "\377\377\377\377\000\000\000\000"- 00:10:23.477 [2024-10-09 01:47:53.024478] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:23.477 [2024-10-09 01:47:53.024508] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:23.477 [2024-10-09 01:47:53.024545] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:10:23.477 [2024-10-09 01:47:53.024561] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:23.477 [2024-10-09 01:47:53.024617] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:10:23.477 [2024-10-09 01:47:53.024631] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:23.477 #38 NEW cov: 12384 ft: 15911 corp: 32/1757b lim: 90 exec/s: 38 rss: 75Mb L: 54/75 MS: 1 EraseBytes- 00:10:23.477 [2024-10-09 01:47:53.064600] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:23.477 [2024-10-09 01:47:53.064633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:23.477 [2024-10-09 01:47:53.064673] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:10:23.477 [2024-10-09 01:47:53.064689] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:23.477 [2024-10-09 01:47:53.064744] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:10:23.477 [2024-10-09 01:47:53.064760] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:23.477 #39 NEW cov: 12384 ft: 15921 corp: 33/1813b lim: 90 exec/s: 39 rss: 75Mb L: 56/75 MS: 1 ChangeByte- 00:10:23.477 [2024-10-09 01:47:53.104413] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:23.477 [2024-10-09 01:47:53.104440] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:23.477 #40 NEW cov: 12384 ft: 15940 corp: 34/1841b lim: 90 exec/s: 40 rss: 75Mb L: 28/75 MS: 1 ChangeBinInt- 00:10:23.736 [2024-10-09 01:47:53.144867] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:23.736 [2024-10-09 01:47:53.144895] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:23.736 [2024-10-09 01:47:53.144933] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:10:23.736 [2024-10-09 01:47:53.144949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:23.736 [2024-10-09 01:47:53.145007] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:10:23.736 [2024-10-09 01:47:53.145023] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:23.736 #41 NEW cov: 12384 ft: 15953 corp: 35/1898b lim: 90 exec/s: 41 rss: 75Mb L: 57/75 MS: 1 InsertByte- 00:10:23.736 [2024-10-09 01:47:53.205154] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:0 nsid:0 00:10:23.736 [2024-10-09 01:47:53.205181] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:23.736 [2024-10-09 01:47:53.205231] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:1 nsid:0 00:10:23.736 [2024-10-09 01:47:53.205247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:23.736 [2024-10-09 01:47:53.205315] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:2 nsid:0 00:10:23.736 [2024-10-09 01:47:53.205331] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:23.736 [2024-10-09 01:47:53.205387] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION ACQUIRE (11) sqid:1 cid:3 nsid:0 00:10:23.736 [2024-10-09 01:47:53.205402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:23.736 #42 NEW cov: 12384 ft: 15960 corp: 36/1981b lim: 90 exec/s: 21 rss: 76Mb L: 83/83 MS: 1 InsertRepeatedBytes- 00:10:23.736 #42 DONE cov: 12384 ft: 15960 corp: 36/1981b lim: 90 exec/s: 21 rss: 76Mb 00:10:23.736 ###### Recommended dictionary. 
###### 00:10:23.736 "\377\377\377\377\000\000\000\000" # Uses: 4 00:10:23.736 "\377\377\377\007" # Uses: 0 00:10:23.736 ###### End of recommended dictionary. ###### 00:10:23.736 Done 42 runs in 2 second(s) 00:10:23.736 01:47:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_20.conf /var/tmp/suppress_nvmf_fuzz 00:10:23.736 01:47:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:23.736 01:47:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:23.736 01:47:53 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 21 1 0x1 00:10:23.736 01:47:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=21 00:10:23.736 01:47:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:10:23.736 01:47:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:10:23.736 01:47:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:10:23.736 01:47:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_21.conf 00:10:23.736 01:47:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:10:23.736 01:47:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:10:23.736 01:47:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 21 00:10:23.736 01:47:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4421 00:10:23.736 01:47:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:10:23.736 01:47:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' 00:10:23.736 01:47:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4421"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:10:23.736 01:47:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:10:23.736 01:47:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:10:23.736 01:47:53 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4421' -c /tmp/fuzz_json_21.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 -Z 21 00:10:23.994 [2024-10-09 01:47:53.410394] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 
00:10:23.994 [2024-10-09 01:47:53.410459] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4047596 ] 00:10:23.994 [2024-10-09 01:47:53.607678] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.994 [2024-10-09 01:47:53.645878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.252 [2024-10-09 01:47:53.705138] tcp.c: 754:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:24.252 [2024-10-09 01:47:53.721336] tcp.c:1098:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4421 *** 00:10:24.252 INFO: Running with entropic power schedule (0xFF, 100). 00:10:24.252 INFO: Seed: 4193500426 00:10:24.252 INFO: Loaded 1 modules (383850 inline 8-bit counters): 383850 [0x2be0a0c, 0x2c3e576), 00:10:24.252 INFO: Loaded 1 PC tables (383850 PCs): 383850 [0x2c3e578,0x3219c18), 00:10:24.252 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_21 00:10:24.252 INFO: A corpus is not provided, starting from an empty corpus 00:10:24.252 #2 INITED exec/s: 0 rss: 66Mb 00:10:24.252 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:10:24.252 This may also happen if the target rejected all inputs we tried so far 00:10:24.252 [2024-10-09 01:47:53.766594] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:24.252 [2024-10-09 01:47:53.766624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:24.510 NEW_FUNC[1/716]: 0x461128 in fuzz_nvm_reservation_release_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:623 00:10:24.510 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:10:24.510 #25 NEW cov: 12132 ft: 12118 corp: 2/13b lim: 50 exec/s: 0 rss: 73Mb L: 12/12 MS: 3 ChangeByte-ChangeBit-InsertRepeatedBytes- 00:10:24.510 [2024-10-09 01:47:54.107566] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:24.510 [2024-10-09 01:47:54.107608] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:24.510 #26 NEW cov: 12245 ft: 12770 corp: 3/25b lim: 50 exec/s: 0 rss: 74Mb L: 12/12 MS: 1 ChangeBit- 00:10:24.510 [2024-10-09 01:47:54.167648] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:24.510 [2024-10-09 01:47:54.167676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:24.768 #27 NEW cov: 12251 ft: 13060 corp: 4/38b lim: 50 exec/s: 0 rss: 74Mb L: 13/13 MS: 1 CopyPart- 00:10:24.768 [2024-10-09 01:47:54.207687] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:24.768 [2024-10-09 01:47:54.207715] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:24.768 #28 NEW cov: 12336 ft: 13353 corp: 5/51b lim: 50 exec/s: 0 rss: 74Mb L: 13/13 MS: 1 ChangeByte- 
00:10:24.768 [2024-10-09 01:47:54.268185] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:24.768 [2024-10-09 01:47:54.268211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:24.768 [2024-10-09 01:47:54.268259] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:10:24.768 [2024-10-09 01:47:54.268274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:24.768 [2024-10-09 01:47:54.268334] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:10:24.768 [2024-10-09 01:47:54.268348] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:24.768 #29 NEW cov: 12336 ft: 14221 corp: 6/88b lim: 50 exec/s: 0 rss: 74Mb L: 37/37 MS: 1 InsertRepeatedBytes- 00:10:24.768 [2024-10-09 01:47:54.307967] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:24.768 [2024-10-09 01:47:54.307994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:24.768 #30 NEW cov: 12336 ft: 14311 corp: 7/100b lim: 50 exec/s: 0 rss: 74Mb L: 12/37 MS: 1 ChangeByte- 00:10:24.768 [2024-10-09 01:47:54.348099] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:24.768 [2024-10-09 01:47:54.348126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:24.768 #31 NEW cov: 12336 ft: 14344 corp: 8/113b lim: 50 exec/s: 0 rss: 74Mb L: 13/37 MS: 1 InsertByte- 00:10:24.768 [2024-10-09 01:47:54.408289] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:24.768 [2024-10-09 01:47:54.408317] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:25.027 #32 NEW cov: 12336 ft: 14347 corp: 9/126b lim: 50 exec/s: 0 rss: 74Mb L: 13/37 MS: 1 CopyPart- 00:10:25.027 [2024-10-09 01:47:54.468592] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:25.027 [2024-10-09 01:47:54.468620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:25.027 [2024-10-09 01:47:54.468678] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:10:25.027 [2024-10-09 01:47:54.468693] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:25.027 #33 NEW cov: 12336 ft: 14675 corp: 10/150b lim: 50 exec/s: 0 rss: 74Mb L: 24/37 MS: 1 CrossOver- 00:10:25.027 [2024-10-09 01:47:54.508854] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:25.027 [2024-10-09 01:47:54.508881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:25.027 [2024-10-09 01:47:54.508929] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:10:25.027 [2024-10-09 01:47:54.508945] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:25.027 [2024-10-09 01:47:54.509003] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:10:25.027 [2024-10-09 01:47:54.509018] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:25.027 #34 NEW cov: 12336 ft: 14757 corp: 11/186b lim: 50 exec/s: 0 rss: 74Mb L: 36/37 MS: 1 CrossOver- 00:10:25.027 [2024-10-09 01:47:54.569026] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:25.027 [2024-10-09 01:47:54.569055] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:25.027 [2024-10-09 01:47:54.569096] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:10:25.027 [2024-10-09 01:47:54.569110] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:25.027 [2024-10-09 01:47:54.569167] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:10:25.027 [2024-10-09 01:47:54.569187] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:25.027 #35 NEW cov: 12336 ft: 14812 corp: 12/223b lim: 50 exec/s: 0 rss: 74Mb L: 37/37 MS: 1 ChangeBinInt- 00:10:25.027 [2024-10-09 01:47:54.628915] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:25.027 [2024-10-09 01:47:54.628942] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:25.027 NEW_FUNC[1/1]: 0x1bf30c8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:10:25.027 #36 NEW cov: 12359 ft: 14860 corp: 13/235b lim: 50 exec/s: 0 rss: 74Mb L: 12/37 MS: 1 ChangeBit- 00:10:25.027 [2024-10-09 01:47:54.689081] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:25.027 [2024-10-09 01:47:54.689108] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:25.285 #37 NEW cov: 12359 ft: 14910 corp: 14/251b lim: 50 exec/s: 0 rss: 74Mb L: 16/37 MS: 1 CMP- DE: "\036\000\000\000"- 00:10:25.285 [2024-10-09 01:47:54.729153] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:25.285 [2024-10-09 01:47:54.729180] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:25.285 #38 NEW cov: 12359 ft: 14925 corp: 15/264b lim: 50 exec/s: 38 rss: 74Mb L: 13/37 MS: 1 ChangeBit- 00:10:25.285 [2024-10-09 01:47:54.789347] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:25.285 [2024-10-09 01:47:54.789374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:25.285 #39 NEW cov: 12359 ft: 14939 corp: 16/277b lim: 50 
exec/s: 39 rss: 74Mb L: 13/37 MS: 1 ChangeBit- 00:10:25.285 [2024-10-09 01:47:54.849667] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:25.285 [2024-10-09 01:47:54.849695] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:25.285 [2024-10-09 01:47:54.849750] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:10:25.285 [2024-10-09 01:47:54.849766] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:25.285 #40 NEW cov: 12359 ft: 14963 corp: 17/301b lim: 50 exec/s: 40 rss: 74Mb L: 24/37 MS: 1 PersAutoDict- DE: "\036\000\000\000"- 00:10:25.285 [2024-10-09 01:47:54.890068] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:25.285 [2024-10-09 01:47:54.890096] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:25.286 [2024-10-09 01:47:54.890143] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:10:25.286 [2024-10-09 01:47:54.890159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:25.286 [2024-10-09 01:47:54.890215] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:10:25.286 [2024-10-09 01:47:54.890231] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:25.286 [2024-10-09 01:47:54.890288] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:10:25.286 [2024-10-09 01:47:54.890303] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:25.286 #41 NEW cov: 12359 ft: 15309 corp: 18/348b lim: 50 exec/s: 41 rss: 74Mb L: 47/47 MS: 1 InsertRepeatedBytes- 00:10:25.286 [2024-10-09 01:47:54.930040] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:25.286 [2024-10-09 01:47:54.930074] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:25.286 [2024-10-09 01:47:54.930125] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:10:25.286 [2024-10-09 01:47:54.930142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:25.286 [2024-10-09 01:47:54.930198] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:10:25.286 [2024-10-09 01:47:54.930215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:25.544 #42 NEW cov: 12359 ft: 15325 corp: 19/384b lim: 50 exec/s: 42 rss: 74Mb L: 36/47 MS: 1 PersAutoDict- DE: "\036\000\000\000"- 00:10:25.544 [2024-10-09 01:47:54.990051] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:25.544 [2024-10-09 01:47:54.990078] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:25.544 [2024-10-09 01:47:54.990134] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:10:25.544 [2024-10-09 01:47:54.990149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:25.544 #43 NEW cov: 12359 ft: 15344 corp: 20/412b lim: 50 exec/s: 43 rss: 75Mb L: 28/47 MS: 1 PersAutoDict- DE: "\036\000\000\000"- 00:10:25.544 [2024-10-09 01:47:55.050212] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:25.544 [2024-10-09 01:47:55.050239] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:25.544 [2024-10-09 01:47:55.050311] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:10:25.544 [2024-10-09 01:47:55.050326] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:25.544 #44 NEW cov: 12359 ft: 15355 corp: 21/440b lim: 50 exec/s: 44 rss: 75Mb L: 28/47 MS: 1 ChangeBit- 00:10:25.544 [2024-10-09 01:47:55.110219] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:25.544 [2024-10-09 01:47:55.110247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:25.544 #45 NEW cov: 12359 ft: 15436 corp: 22/454b lim: 50 exec/s: 45 rss: 75Mb L: 14/47 MS: 1 InsertByte- 00:10:25.544 [2024-10-09 01:47:55.170691] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:25.544 [2024-10-09 01:47:55.170717] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:25.544 [2024-10-09 01:47:55.170763] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:10:25.544 [2024-10-09 01:47:55.170779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:25.544 [2024-10-09 01:47:55.170839] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:10:25.544 [2024-10-09 01:47:55.170856] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:25.802 #46 NEW cov: 12359 ft: 15472 corp: 23/485b lim: 50 exec/s: 46 rss: 75Mb L: 31/47 MS: 1 CrossOver- 00:10:25.802 [2024-10-09 01:47:55.230568] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:25.802 [2024-10-09 01:47:55.230593] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:25.802 #47 NEW cov: 12359 ft: 15508 corp: 24/495b lim: 50 exec/s: 47 rss: 75Mb L: 10/47 MS: 1 EraseBytes- 00:10:25.802 [2024-10-09 01:47:55.270651] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:25.802 [2024-10-09 01:47:55.270678] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:25.802 #48 NEW cov: 12359 ft: 15539 corp: 25/508b lim: 50 exec/s: 48 rss: 75Mb L: 13/47 MS: 1 ChangeBit- 00:10:25.802 [2024-10-09 01:47:55.310787] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:25.802 [2024-10-09 01:47:55.310819] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:25.802 #49 NEW cov: 12359 ft: 15570 corp: 26/520b lim: 50 exec/s: 49 rss: 75Mb L: 12/47 MS: 1 CopyPart- 00:10:25.802 [2024-10-09 01:47:55.350894] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:25.802 [2024-10-09 01:47:55.350921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:25.802 #50 NEW cov: 12359 ft: 15582 corp: 27/534b lim: 50 exec/s: 50 rss: 75Mb L: 14/47 MS: 1 ChangeBit- 00:10:25.802 [2024-10-09 01:47:55.411217] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:25.802 [2024-10-09 01:47:55.411243] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:25.802 [2024-10-09 01:47:55.411283] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:10:25.802 [2024-10-09 01:47:55.411300] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:25.802 #51 NEW cov: 12359 ft: 15617 corp: 28/556b lim: 50 exec/s: 51 rss: 75Mb L: 22/47 MS: 1 CopyPart- 00:10:26.061 [2024-10-09 01:47:55.471279] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:26.061 [2024-10-09 01:47:55.471306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:26.061 #52 NEW cov: 12359 ft: 15645 corp: 29/569b lim: 50 exec/s: 52 rss: 75Mb L: 13/47 MS: 1 ShuffleBytes- 00:10:26.061 [2024-10-09 01:47:55.511762] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:26.061 [2024-10-09 01:47:55.511789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:26.061 [2024-10-09 01:47:55.511847] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:10:26.061 [2024-10-09 01:47:55.511864] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:26.061 [2024-10-09 01:47:55.511919] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:10:26.061 [2024-10-09 01:47:55.511934] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:26.061 [2024-10-09 01:47:55.511989] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:3 nsid:0 00:10:26.061 [2024-10-09 01:47:55.512003] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:26.061 #54 NEW cov: 12359 ft: 15653 corp: 30/615b lim: 50 
exec/s: 54 rss: 75Mb L: 46/47 MS: 2 EraseBytes-InsertRepeatedBytes- 00:10:26.061 [2024-10-09 01:47:55.551468] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:26.061 [2024-10-09 01:47:55.551495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:26.061 #55 NEW cov: 12359 ft: 15670 corp: 31/628b lim: 50 exec/s: 55 rss: 75Mb L: 13/47 MS: 1 ChangeBinInt- 00:10:26.061 [2024-10-09 01:47:55.591720] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:26.061 [2024-10-09 01:47:55.591753] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:26.061 [2024-10-09 01:47:55.591809] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:10:26.061 [2024-10-09 01:47:55.591831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:26.061 #56 NEW cov: 12359 ft: 15752 corp: 32/653b lim: 50 exec/s: 56 rss: 75Mb L: 25/47 MS: 1 CrossOver- 00:10:26.061 [2024-10-09 01:47:55.651764] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:26.061 [2024-10-09 01:47:55.651791] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:26.061 #57 NEW cov: 12359 ft: 15770 corp: 33/666b lim: 50 exec/s: 57 rss: 75Mb L: 13/47 MS: 1 ChangeBit- 00:10:26.061 [2024-10-09 01:47:55.692195] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:26.061 [2024-10-09 01:47:55.692222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:26.061 [2024-10-09 01:47:55.692271] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:10:26.061 [2024-10-09 01:47:55.692286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:26.061 [2024-10-09 01:47:55.692342] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 00:10:26.061 [2024-10-09 01:47:55.692357] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:26.320 #58 NEW cov: 12359 ft: 15782 corp: 34/697b lim: 50 exec/s: 58 rss: 75Mb L: 31/47 MS: 1 ChangeBinInt- 00:10:26.320 [2024-10-09 01:47:55.752361] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:0 nsid:0 00:10:26.320 [2024-10-09 01:47:55.752388] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:26.320 [2024-10-09 01:47:55.752426] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:1 nsid:0 00:10:26.320 [2024-10-09 01:47:55.752443] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:26.320 [2024-10-09 01:47:55.752498] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION RELEASE (15) sqid:1 cid:2 nsid:0 
00:10:26.320 [2024-10-09 01:47:55.752513] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:26.320 #59 NEW cov: 12359 ft: 15793 corp: 35/728b lim: 50 exec/s: 29 rss: 75Mb L: 31/47 MS: 1 ChangeByte- 00:10:26.320 #59 DONE cov: 12359 ft: 15793 corp: 35/728b lim: 50 exec/s: 29 rss: 75Mb 00:10:26.320 ###### Recommended dictionary. ###### 00:10:26.320 "\036\000\000\000" # Uses: 3 00:10:26.320 ###### End of recommended dictionary. ###### 00:10:26.320 Done 59 runs in 2 second(s) 00:10:26.320 01:47:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_21.conf /var/tmp/suppress_nvmf_fuzz 00:10:26.320 01:47:55 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:26.320 01:47:55 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:26.320 01:47:55 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 22 1 0x1 00:10:26.320 01:47:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=22 00:10:26.320 01:47:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:10:26.320 01:47:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:10:26.320 01:47:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:10:26.320 01:47:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_22.conf 00:10:26.320 01:47:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:10:26.320 01:47:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:10:26.320 01:47:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 22 00:10:26.320 01:47:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4422 00:10:26.320 01:47:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:10:26.320 01:47:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' 00:10:26.320 01:47:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4422"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:10:26.320 01:47:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:10:26.320 01:47:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:10:26.320 01:47:55 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4422' -c /tmp/fuzz_json_22.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 -Z 22 00:10:26.320 [2024-10-09 01:47:55.940057] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 
00:10:26.320 [2024-10-09 01:47:55.940147] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4047871 ] 00:10:26.578 [2024-10-09 01:47:56.148041] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.578 [2024-10-09 01:47:56.186383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.836 [2024-10-09 01:47:56.245364] tcp.c: 754:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:26.836 [2024-10-09 01:47:56.261548] tcp.c:1098:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4422 *** 00:10:26.836 INFO: Running with entropic power schedule (0xFF, 100). 00:10:26.836 INFO: Seed: 2438547130 00:10:26.836 INFO: Loaded 1 modules (383850 inline 8-bit counters): 383850 [0x2be0a0c, 0x2c3e576), 00:10:26.836 INFO: Loaded 1 PC tables (383850 PCs): 383850 [0x2c3e578,0x3219c18), 00:10:26.836 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_22 00:10:26.836 INFO: A corpus is not provided, starting from an empty corpus 00:10:26.836 #2 INITED exec/s: 0 rss: 67Mb 00:10:26.836 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:10:26.836 This may also happen if the target rejected all inputs we tried so far 00:10:26.836 [2024-10-09 01:47:56.307038] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:26.836 [2024-10-09 01:47:56.307070] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:26.836 [2024-10-09 01:47:56.307139] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:26.836 [2024-10-09 01:47:56.307155] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:27.093 NEW_FUNC[1/716]: 0x4633f8 in fuzz_nvm_reservation_register_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:644 00:10:27.093 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:10:27.093 #4 NEW cov: 12158 ft: 12156 corp: 2/39b lim: 85 exec/s: 0 rss: 73Mb L: 38/38 MS: 2 ShuffleBytes-InsertRepeatedBytes- 00:10:27.093 [2024-10-09 01:47:56.648221] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:27.093 [2024-10-09 01:47:56.648257] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:27.093 [2024-10-09 01:47:56.648334] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:27.093 [2024-10-09 01:47:56.648350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:27.093 [2024-10-09 01:47:56.648406] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:10:27.093 [2024-10-09 01:47:56.648421] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:27.093 
[2024-10-09 01:47:56.648476] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:10:27.093 [2024-10-09 01:47:56.648491] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:27.093 #9 NEW cov: 12271 ft: 13035 corp: 3/118b lim: 85 exec/s: 0 rss: 73Mb L: 79/79 MS: 5 CopyPart-ShuffleBytes-ShuffleBytes-ShuffleBytes-InsertRepeatedBytes- 00:10:27.093 [2024-10-09 01:47:56.687879] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:27.093 [2024-10-09 01:47:56.687908] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:27.093 [2024-10-09 01:47:56.687976] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:27.093 [2024-10-09 01:47:56.687992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:27.093 #10 NEW cov: 12277 ft: 13234 corp: 4/156b lim: 85 exec/s: 0 rss: 74Mb L: 38/79 MS: 1 CrossOver- 00:10:27.093 [2024-10-09 01:47:56.748367] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:27.093 [2024-10-09 01:47:56.748393] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:27.093 [2024-10-09 01:47:56.748436] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:27.093 [2024-10-09 01:47:56.748452] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:27.094 [2024-10-09 01:47:56.748504] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:10:27.094 [2024-10-09 01:47:56.748519] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:27.094 [2024-10-09 01:47:56.748573] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:10:27.094 [2024-10-09 01:47:56.748588] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:27.367 #12 NEW cov: 12362 ft: 13597 corp: 5/229b lim: 85 exec/s: 0 rss: 74Mb L: 73/79 MS: 2 ChangeByte-InsertRepeatedBytes- 00:10:27.367 [2024-10-09 01:47:56.788439] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:27.367 [2024-10-09 01:47:56.788467] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:27.367 [2024-10-09 01:47:56.788512] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:27.367 [2024-10-09 01:47:56.788527] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:27.367 [2024-10-09 01:47:56.788581] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:10:27.367 [2024-10-09 01:47:56.788595] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:27.367 [2024-10-09 01:47:56.788651] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:10:27.367 [2024-10-09 01:47:56.788669] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:27.367 #13 NEW cov: 12362 ft: 13744 corp: 6/302b lim: 85 exec/s: 0 rss: 74Mb L: 73/79 MS: 1 ChangeByte- 00:10:27.367 [2024-10-09 01:47:56.848611] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:27.367 [2024-10-09 01:47:56.848638] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:27.367 [2024-10-09 01:47:56.848685] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:27.367 [2024-10-09 01:47:56.848700] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:27.367 [2024-10-09 01:47:56.848752] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:10:27.367 [2024-10-09 01:47:56.848768] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:27.367 [2024-10-09 01:47:56.848824] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:10:27.367 [2024-10-09 01:47:56.848855] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:27.367 #14 NEW cov: 12362 ft: 13801 corp: 7/380b lim: 85 exec/s: 0 rss: 74Mb L: 78/79 MS: 1 InsertRepeatedBytes- 00:10:27.367 [2024-10-09 01:47:56.908816] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:27.367 [2024-10-09 01:47:56.908843] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:27.367 [2024-10-09 01:47:56.908927] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:27.367 [2024-10-09 01:47:56.908941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:27.367 [2024-10-09 01:47:56.909010] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:10:27.367 [2024-10-09 01:47:56.909025] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:27.367 [2024-10-09 01:47:56.909079] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:10:27.367 [2024-10-09 01:47:56.909095] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:27.367 #15 NEW cov: 12362 ft: 13869 corp: 8/463b lim: 85 exec/s: 0 rss: 74Mb L: 83/83 MS: 1 InsertRepeatedBytes- 00:10:27.368 [2024-10-09 01:47:56.968969] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:27.368 [2024-10-09 01:47:56.968997] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:27.368 [2024-10-09 01:47:56.969062] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:27.368 [2024-10-09 01:47:56.969078] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:27.368 [2024-10-09 01:47:56.969133] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:10:27.368 [2024-10-09 01:47:56.969148] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:27.368 [2024-10-09 01:47:56.969201] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:10:27.368 [2024-10-09 01:47:56.969216] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:27.368 #16 NEW cov: 12362 ft: 13928 corp: 9/536b lim: 85 exec/s: 0 rss: 74Mb L: 73/83 MS: 1 CopyPart- 00:10:27.368 [2024-10-09 01:47:57.008790] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:27.368 [2024-10-09 01:47:57.008821] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:27.368 [2024-10-09 01:47:57.008861] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:27.368 [2024-10-09 01:47:57.008876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:27.368 #21 NEW cov: 12362 ft: 14053 corp: 10/576b lim: 85 exec/s: 0 rss: 74Mb L: 40/83 MS: 5 InsertByte-InsertByte-ChangeBit-ShuffleBytes-InsertRepeatedBytes- 00:10:27.663 [2024-10-09 01:47:57.048913] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:27.663 [2024-10-09 01:47:57.048941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:27.663 [2024-10-09 01:47:57.049002] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:27.663 [2024-10-09 01:47:57.049019] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:27.663 #23 NEW cov: 12362 ft: 14161 corp: 11/615b lim: 85 exec/s: 0 rss: 74Mb L: 39/83 MS: 2 ChangeBit-CrossOver- 00:10:27.663 [2024-10-09 01:47:57.089021] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:27.663 [2024-10-09 01:47:57.089049] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:27.663 [2024-10-09 01:47:57.089102] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:27.663 [2024-10-09 01:47:57.089117] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:27.663 #24 NEW cov: 12362 ft: 14214 corp: 12/655b lim: 85 exec/s: 0 rss: 74Mb L: 40/83 MS: 1 ChangeBit- 00:10:27.663 [2024-10-09 01:47:57.149490] nvme_qpair.c: 
256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:27.663 [2024-10-09 01:47:57.149517] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:27.663 [2024-10-09 01:47:57.149565] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:27.663 [2024-10-09 01:47:57.149580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:27.663 [2024-10-09 01:47:57.149634] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:10:27.663 [2024-10-09 01:47:57.149648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:27.663 [2024-10-09 01:47:57.149702] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:10:27.663 [2024-10-09 01:47:57.149716] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:27.663 #25 NEW cov: 12362 ft: 14256 corp: 13/728b lim: 85 exec/s: 0 rss: 74Mb L: 73/83 MS: 1 ShuffleBytes- 00:10:27.663 [2024-10-09 01:47:57.189593] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:27.663 [2024-10-09 01:47:57.189620] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:27.663 [2024-10-09 01:47:57.189670] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:27.663 [2024-10-09 01:47:57.189685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:27.663 [2024-10-09 01:47:57.189753] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:10:27.663 [2024-10-09 01:47:57.189772] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:27.663 [2024-10-09 01:47:57.189833] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:10:27.663 [2024-10-09 01:47:57.189848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:27.663 NEW_FUNC[1/1]: 0x1bf30c8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:10:27.663 #31 NEW cov: 12385 ft: 14298 corp: 14/806b lim: 85 exec/s: 0 rss: 74Mb L: 78/83 MS: 1 ChangeByte- 00:10:27.663 [2024-10-09 01:47:57.249831] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:27.663 [2024-10-09 01:47:57.249858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:27.663 [2024-10-09 01:47:57.249906] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:27.664 [2024-10-09 01:47:57.249922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:27.664 [2024-10-09 
01:47:57.249978] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:10:27.664 [2024-10-09 01:47:57.249993] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:27.664 [2024-10-09 01:47:57.250048] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:10:27.664 [2024-10-09 01:47:57.250062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:27.664 #32 NEW cov: 12385 ft: 14305 corp: 15/879b lim: 85 exec/s: 0 rss: 74Mb L: 73/83 MS: 1 ChangeByte- 00:10:27.664 [2024-10-09 01:47:57.309972] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:27.664 [2024-10-09 01:47:57.309999] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:27.664 [2024-10-09 01:47:57.310047] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:27.664 [2024-10-09 01:47:57.310062] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:27.664 [2024-10-09 01:47:57.310116] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:10:27.664 [2024-10-09 01:47:57.310132] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:27.664 [2024-10-09 01:47:57.310186] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:10:27.664 [2024-10-09 01:47:57.310201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:27.946 #33 NEW cov: 12385 ft: 14319 corp: 16/962b lim: 85 exec/s: 33 rss: 74Mb L: 83/83 MS: 1 ChangeBit- 00:10:27.946 [2024-10-09 01:47:57.370108] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:27.947 [2024-10-09 01:47:57.370136] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:27.947 [2024-10-09 01:47:57.370184] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:27.947 [2024-10-09 01:47:57.370199] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:27.947 [2024-10-09 01:47:57.370252] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:10:27.947 [2024-10-09 01:47:57.370266] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:27.947 [2024-10-09 01:47:57.370323] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:10:27.947 [2024-10-09 01:47:57.370338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:27.947 #34 NEW cov: 12385 ft: 14345 corp: 17/1040b lim: 85 exec/s: 34 rss: 74Mb L: 78/83 MS: 1 ChangeBit- 00:10:27.947 [2024-10-09 
01:47:57.410388] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:27.947 [2024-10-09 01:47:57.410415] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:27.947 [2024-10-09 01:47:57.410481] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:27.947 [2024-10-09 01:47:57.410497] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:27.947 [2024-10-09 01:47:57.410551] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:10:27.947 [2024-10-09 01:47:57.410566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:27.947 [2024-10-09 01:47:57.410622] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:10:27.947 [2024-10-09 01:47:57.410637] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:27.947 [2024-10-09 01:47:57.410693] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:4 nsid:0 00:10:27.947 [2024-10-09 01:47:57.410708] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:10:27.947 #35 NEW cov: 12385 ft: 14394 corp: 18/1125b lim: 85 exec/s: 35 rss: 74Mb L: 85/85 MS: 1 InsertRepeatedBytes- 00:10:27.947 [2024-10-09 01:47:57.450346] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:27.947 [2024-10-09 01:47:57.450374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:27.947 [2024-10-09 01:47:57.450420] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:27.947 [2024-10-09 01:47:57.450436] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:27.947 [2024-10-09 01:47:57.450507] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:10:27.947 [2024-10-09 01:47:57.450522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:27.947 [2024-10-09 01:47:57.450578] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:10:27.947 [2024-10-09 01:47:57.450594] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:27.947 #36 NEW cov: 12385 ft: 14439 corp: 19/1199b lim: 85 exec/s: 36 rss: 74Mb L: 74/85 MS: 1 InsertByte- 00:10:27.947 [2024-10-09 01:47:57.490468] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:27.947 [2024-10-09 01:47:57.490495] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:27.947 [2024-10-09 01:47:57.490560] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) 
sqid:1 cid:1 nsid:0 00:10:27.947 [2024-10-09 01:47:57.490577] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:27.947 [2024-10-09 01:47:57.490629] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:10:27.947 [2024-10-09 01:47:57.490650] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:27.947 [2024-10-09 01:47:57.490705] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:10:27.947 [2024-10-09 01:47:57.490719] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:27.947 #37 NEW cov: 12385 ft: 14465 corp: 20/1282b lim: 85 exec/s: 37 rss: 74Mb L: 83/85 MS: 1 ShuffleBytes- 00:10:27.947 [2024-10-09 01:47:57.550649] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:27.947 [2024-10-09 01:47:57.550674] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:27.947 [2024-10-09 01:47:57.550720] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:27.947 [2024-10-09 01:47:57.550736] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:27.947 [2024-10-09 01:47:57.550790] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:10:27.947 [2024-10-09 01:47:57.550805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:27.947 [2024-10-09 01:47:57.550862] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:10:27.947 [2024-10-09 01:47:57.550876] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:27.947 #38 NEW cov: 12385 ft: 14538 corp: 21/1365b lim: 85 exec/s: 38 rss: 74Mb L: 83/85 MS: 1 CopyPart- 00:10:27.947 [2024-10-09 01:47:57.590746] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:27.947 [2024-10-09 01:47:57.590773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:27.947 [2024-10-09 01:47:57.590826] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:27.947 [2024-10-09 01:47:57.590842] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:27.947 [2024-10-09 01:47:57.590892] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:10:27.947 [2024-10-09 01:47:57.590906] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:27.947 [2024-10-09 01:47:57.590960] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:10:27.947 [2024-10-09 01:47:57.590975] nvme_qpair.c: 477:spdk_nvme_print_completion: 
*NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:28.206 #39 NEW cov: 12385 ft: 14592 corp: 22/1443b lim: 85 exec/s: 39 rss: 74Mb L: 78/85 MS: 1 ChangeBinInt- 00:10:28.206 [2024-10-09 01:47:57.650948] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:28.206 [2024-10-09 01:47:57.650975] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:28.206 [2024-10-09 01:47:57.651041] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:28.206 [2024-10-09 01:47:57.651057] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:28.206 [2024-10-09 01:47:57.651111] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:10:28.206 [2024-10-09 01:47:57.651126] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:28.206 [2024-10-09 01:47:57.651182] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:10:28.206 [2024-10-09 01:47:57.651201] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:28.206 #40 NEW cov: 12385 ft: 14604 corp: 23/1516b lim: 85 exec/s: 40 rss: 74Mb L: 73/85 MS: 1 ShuffleBytes- 00:10:28.206 [2024-10-09 01:47:57.691048] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:28.206 [2024-10-09 01:47:57.691073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:28.206 [2024-10-09 01:47:57.691136] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:28.206 [2024-10-09 01:47:57.691152] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:28.206 [2024-10-09 01:47:57.691206] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:10:28.206 [2024-10-09 01:47:57.691222] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:28.206 [2024-10-09 01:47:57.691277] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:10:28.206 [2024-10-09 01:47:57.691293] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:28.206 #41 NEW cov: 12385 ft: 14618 corp: 24/1594b lim: 85 exec/s: 41 rss: 75Mb L: 78/85 MS: 1 ChangeByte- 00:10:28.206 [2024-10-09 01:47:57.751205] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:28.206 [2024-10-09 01:47:57.751233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:28.206 [2024-10-09 01:47:57.751299] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:28.206 [2024-10-09 01:47:57.751315] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:28.206 [2024-10-09 01:47:57.751371] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:10:28.206 [2024-10-09 01:47:57.751387] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:28.206 [2024-10-09 01:47:57.751443] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:10:28.206 [2024-10-09 01:47:57.751459] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:28.206 #42 NEW cov: 12385 ft: 14625 corp: 25/1672b lim: 85 exec/s: 42 rss: 75Mb L: 78/85 MS: 1 ChangeByte- 00:10:28.206 [2024-10-09 01:47:57.791745] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:28.206 [2024-10-09 01:47:57.791773] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:28.206 [2024-10-09 01:47:57.791834] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:28.206 [2024-10-09 01:47:57.791850] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:28.206 [2024-10-09 01:47:57.791906] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:10:28.206 [2024-10-09 01:47:57.791922] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:28.206 [2024-10-09 01:47:57.791979] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:10:28.206 [2024-10-09 01:47:57.791994] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:28.206 #43 NEW cov: 12394 ft: 14742 corp: 26/1753b lim: 85 exec/s: 43 rss: 75Mb L: 81/85 MS: 1 InsertRepeatedBytes- 00:10:28.206 [2024-10-09 01:47:57.831173] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:28.206 [2024-10-09 01:47:57.831204] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:28.206 [2024-10-09 01:47:57.831246] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:28.206 [2024-10-09 01:47:57.831262] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:28.206 #44 NEW cov: 12394 ft: 14767 corp: 27/1792b lim: 85 exec/s: 44 rss: 75Mb L: 39/85 MS: 1 InsertByte- 00:10:28.206 [2024-10-09 01:47:57.871604] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:28.206 [2024-10-09 01:47:57.871633] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:28.206 [2024-10-09 01:47:57.871683] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:28.206 [2024-10-09 01:47:57.871699] 
nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:28.206 [2024-10-09 01:47:57.871754] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:10:28.206 [2024-10-09 01:47:57.871770] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:28.206 [2024-10-09 01:47:57.871831] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:10:28.206 [2024-10-09 01:47:57.871847] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:28.464 #45 NEW cov: 12394 ft: 14778 corp: 28/1873b lim: 85 exec/s: 45 rss: 75Mb L: 81/85 MS: 1 CopyPart- 00:10:28.464 [2024-10-09 01:47:57.931415] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:28.464 [2024-10-09 01:47:57.931441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:28.464 [2024-10-09 01:47:57.931495] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:28.464 [2024-10-09 01:47:57.931511] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:28.464 #46 NEW cov: 12394 ft: 14790 corp: 29/1911b lim: 85 exec/s: 46 rss: 75Mb L: 38/85 MS: 1 ChangeBinInt- 00:10:28.464 [2024-10-09 01:47:57.991871] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:28.464 [2024-10-09 01:47:57.991898] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:28.464 [2024-10-09 01:47:57.991948] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:28.464 [2024-10-09 01:47:57.991964] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:28.464 [2024-10-09 01:47:57.992020] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:10:28.464 [2024-10-09 01:47:57.992035] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:28.464 [2024-10-09 01:47:57.992089] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:10:28.464 [2024-10-09 01:47:57.992104] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:28.464 #47 NEW cov: 12394 ft: 14791 corp: 30/1982b lim: 85 exec/s: 47 rss: 75Mb L: 71/85 MS: 1 CrossOver- 00:10:28.464 [2024-10-09 01:47:58.052095] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:28.465 [2024-10-09 01:47:58.052124] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:28.465 [2024-10-09 01:47:58.052191] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:28.465 [2024-10-09 
01:47:58.052207] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:28.465 [2024-10-09 01:47:58.052262] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:10:28.465 [2024-10-09 01:47:58.052278] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:28.465 [2024-10-09 01:47:58.052333] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:10:28.465 [2024-10-09 01:47:58.052349] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:28.465 #48 NEW cov: 12394 ft: 14830 corp: 31/2056b lim: 85 exec/s: 48 rss: 75Mb L: 74/85 MS: 1 ChangeBinInt- 00:10:28.465 [2024-10-09 01:47:58.111895] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:28.465 [2024-10-09 01:47:58.111921] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:28.465 [2024-10-09 01:47:58.111961] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:28.465 [2024-10-09 01:47:58.111976] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:28.724 #49 NEW cov: 12394 ft: 14836 corp: 32/2096b lim: 85 exec/s: 49 rss: 75Mb L: 40/85 MS: 1 InsertByte- 00:10:28.724 [2024-10-09 01:47:58.172389] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:28.724 [2024-10-09 01:47:58.172416] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:28.724 [2024-10-09 01:47:58.172469] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:28.724 [2024-10-09 01:47:58.172484] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:28.724 [2024-10-09 01:47:58.172535] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:10:28.724 [2024-10-09 01:47:58.172551] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:28.724 [2024-10-09 01:47:58.172603] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:10:28.724 [2024-10-09 01:47:58.172619] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:28.724 #50 NEW cov: 12394 ft: 14843 corp: 33/2169b lim: 85 exec/s: 50 rss: 75Mb L: 73/85 MS: 1 ShuffleBytes- 00:10:28.724 [2024-10-09 01:47:58.232517] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:28.724 [2024-10-09 01:47:58.232544] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:28.724 [2024-10-09 01:47:58.232595] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:28.724 
[2024-10-09 01:47:58.232610] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:28.724 [2024-10-09 01:47:58.232661] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:10:28.724 [2024-10-09 01:47:58.232676] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:28.724 [2024-10-09 01:47:58.232732] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:10:28.724 [2024-10-09 01:47:58.232748] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:28.724 #51 NEW cov: 12394 ft: 14853 corp: 34/2239b lim: 85 exec/s: 51 rss: 75Mb L: 70/85 MS: 1 EraseBytes- 00:10:28.724 [2024-10-09 01:47:58.272614] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:0 nsid:0 00:10:28.724 [2024-10-09 01:47:58.272640] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:28.724 [2024-10-09 01:47:58.272711] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:1 nsid:0 00:10:28.724 [2024-10-09 01:47:58.272726] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:28.724 [2024-10-09 01:47:58.272780] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:2 nsid:0 00:10:28.724 [2024-10-09 01:47:58.272795] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:28.724 [2024-10-09 01:47:58.272853] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REGISTER (0d) sqid:1 cid:3 nsid:0 00:10:28.724 [2024-10-09 01:47:58.272869] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:28.724 #56 NEW cov: 12394 ft: 14863 corp: 35/2311b lim: 85 exec/s: 28 rss: 75Mb L: 72/85 MS: 5 ChangeBit-ChangeBinInt-ChangeBinInt-CrossOver-CrossOver- 00:10:28.724 #56 DONE cov: 12394 ft: 14863 corp: 35/2311b lim: 85 exec/s: 28 rss: 75Mb 00:10:28.724 Done 56 runs in 2 second(s) 00:10:28.983 01:47:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_22.conf /var/tmp/suppress_nvmf_fuzz 00:10:28.983 01:47:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:28.983 01:47:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:28.983 01:47:58 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 23 1 0x1 00:10:28.983 01:47:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=23 00:10:28.983 01:47:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:10:28.983 01:47:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:10:28.983 01:47:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:10:28.983 01:47:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_23.conf 00:10:28.983 01:47:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:10:28.983 
01:47:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:10:28.983 01:47:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 23 00:10:28.983 01:47:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4423 00:10:28.983 01:47:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:10:28.983 01:47:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' 00:10:28.983 01:47:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4423"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:10:28.983 01:47:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:10:28.983 01:47:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:10:28.983 01:47:58 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4423' -c /tmp/fuzz_json_23.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 -Z 23 00:10:28.983 [2024-10-09 01:47:58.453390] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 00:10:28.983 [2024-10-09 01:47:58.453453] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4048160 ] 00:10:29.242 [2024-10-09 01:47:58.667106] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.242 [2024-10-09 01:47:58.706621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.242 [2024-10-09 01:47:58.765925] tcp.c: 754:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:29.242 [2024-10-09 01:47:58.782125] tcp.c:1098:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4423 *** 00:10:29.242 INFO: Running with entropic power schedule (0xFF, 100). 00:10:29.242 INFO: Seed: 664563201 00:10:29.242 INFO: Loaded 1 modules (383850 inline 8-bit counters): 383850 [0x2be0a0c, 0x2c3e576), 00:10:29.242 INFO: Loaded 1 PC tables (383850 PCs): 383850 [0x2c3e578,0x3219c18), 00:10:29.242 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_23 00:10:29.242 INFO: A corpus is not provided, starting from an empty corpus 00:10:29.242 #2 INITED exec/s: 0 rss: 66Mb 00:10:29.242 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:10:29.242 This may also happen if the target rejected all inputs we tried so far 00:10:29.242 [2024-10-09 01:47:58.847533] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:29.242 [2024-10-09 01:47:58.847565] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:29.500 NEW_FUNC[1/714]: 0x466638 in fuzz_nvm_reservation_report_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:671 00:10:29.500 NEW_FUNC[2/714]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:10:29.500 #3 NEW cov: 12068 ft: 12066 corp: 2/10b lim: 25 exec/s: 0 rss: 73Mb L: 9/9 MS: 1 CMP- DE: "\207\374%;.$'\000"- 00:10:29.759 [2024-10-09 01:47:59.188719] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:29.759 [2024-10-09 01:47:59.188782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:29.759 [2024-10-09 01:47:59.188874] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:29.759 [2024-10-09 01:47:59.188904] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:29.759 [2024-10-09 01:47:59.188985] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:29.759 [2024-10-09 01:47:59.189013] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:29.759 NEW_FUNC[1/1]: 0xf67bc8 in spdk_get_ticks /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk/env.c:321 00:10:29.759 #4 NEW cov: 12203 ft: 13154 corp: 3/27b lim: 25 exec/s: 0 rss: 74Mb L: 17/17 MS: 1 PersAutoDict- DE: "\207\374%;.$'\000"- 00:10:29.759 [2024-10-09 01:47:59.258536] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:29.759 [2024-10-09 01:47:59.258566] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:29.759 [2024-10-09 01:47:59.258619] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:29.760 [2024-10-09 01:47:59.258634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:29.760 #7 NEW cov: 12209 ft: 13627 corp: 4/37b lim: 25 exec/s: 0 rss: 74Mb L: 10/17 MS: 3 ChangeByte-InsertByte-PersAutoDict- DE: "\207\374%;.$'\000"- 00:10:29.760 [2024-10-09 01:47:59.298978] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:29.760 [2024-10-09 01:47:59.299006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:29.760 [2024-10-09 01:47:59.299063] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:29.760 [2024-10-09 01:47:59.299077] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:29.760 [2024-10-09 
01:47:59.299131] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:29.760 [2024-10-09 01:47:59.299147] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:29.760 [2024-10-09 01:47:59.299200] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:10:29.760 [2024-10-09 01:47:59.299215] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:29.760 [2024-10-09 01:47:59.299271] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:10:29.760 [2024-10-09 01:47:59.299286] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:10:29.760 #8 NEW cov: 12294 ft: 14385 corp: 5/62b lim: 25 exec/s: 0 rss: 74Mb L: 25/25 MS: 1 CMP- DE: "\377\377\377\377\377\377\377\017"- 00:10:29.760 [2024-10-09 01:47:59.358700] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:29.760 [2024-10-09 01:47:59.358727] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:29.760 #9 NEW cov: 12294 ft: 14455 corp: 6/70b lim: 25 exec/s: 0 rss: 74Mb L: 8/25 MS: 1 EraseBytes- 00:10:29.760 [2024-10-09 01:47:59.419107] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:29.760 [2024-10-09 01:47:59.419134] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:29.760 [2024-10-09 01:47:59.419188] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:29.760 [2024-10-09 01:47:59.419203] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:29.760 [2024-10-09 01:47:59.419258] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:29.760 [2024-10-09 01:47:59.419273] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:30.018 #10 NEW cov: 12294 ft: 14546 corp: 7/86b lim: 25 exec/s: 0 rss: 74Mb L: 16/25 MS: 1 InsertRepeatedBytes- 00:10:30.018 [2024-10-09 01:47:59.459168] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:30.018 [2024-10-09 01:47:59.459194] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:30.018 [2024-10-09 01:47:59.459240] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:30.018 [2024-10-09 01:47:59.459254] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:30.018 [2024-10-09 01:47:59.459307] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:30.018 [2024-10-09 01:47:59.459338] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:30.018 
#11 NEW cov: 12294 ft: 14595 corp: 8/104b lim: 25 exec/s: 0 rss: 74Mb L: 18/25 MS: 1 CopyPart- 00:10:30.018 [2024-10-09 01:47:59.519599] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:30.018 [2024-10-09 01:47:59.519630] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:30.018 [2024-10-09 01:47:59.519695] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:30.018 [2024-10-09 01:47:59.519710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:30.018 [2024-10-09 01:47:59.519764] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:30.018 [2024-10-09 01:47:59.519779] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:30.018 [2024-10-09 01:47:59.519841] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:10:30.018 [2024-10-09 01:47:59.519857] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:30.018 [2024-10-09 01:47:59.519913] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:10:30.018 [2024-10-09 01:47:59.519927] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:10:30.018 #12 NEW cov: 12294 ft: 14740 corp: 9/129b lim: 25 exec/s: 0 rss: 74Mb L: 25/25 MS: 1 ShuffleBytes- 00:10:30.018 [2024-10-09 01:47:59.579770] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:30.019 [2024-10-09 01:47:59.579796] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:30.019 [2024-10-09 01:47:59.579872] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:30.019 [2024-10-09 01:47:59.579899] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:30.019 [2024-10-09 01:47:59.579950] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:30.019 [2024-10-09 01:47:59.579966] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:30.019 [2024-10-09 01:47:59.580019] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:10:30.019 [2024-10-09 01:47:59.580033] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:30.019 [2024-10-09 01:47:59.580089] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:10:30.019 [2024-10-09 01:47:59.580103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:10:30.019 #13 NEW cov: 12294 ft: 14792 corp: 10/154b lim: 25 exec/s: 0 rss: 74Mb L: 25/25 MS: 1 PersAutoDict- DE: 
"\377\377\377\377\377\377\377\017"- 00:10:30.019 [2024-10-09 01:47:59.619626] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:30.019 [2024-10-09 01:47:59.619652] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:30.019 [2024-10-09 01:47:59.619699] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:30.019 [2024-10-09 01:47:59.619714] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:30.019 [2024-10-09 01:47:59.619767] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:30.019 [2024-10-09 01:47:59.619782] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:30.019 #14 NEW cov: 12294 ft: 14821 corp: 11/172b lim: 25 exec/s: 0 rss: 74Mb L: 18/25 MS: 1 ChangeBit- 00:10:30.019 [2024-10-09 01:47:59.679781] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:30.019 [2024-10-09 01:47:59.679811] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:30.019 [2024-10-09 01:47:59.679873] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:30.019 [2024-10-09 01:47:59.679889] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:30.019 [2024-10-09 01:47:59.679944] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:30.019 [2024-10-09 01:47:59.679960] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:30.277 #15 NEW cov: 12294 ft: 14883 corp: 12/191b lim: 25 exec/s: 0 rss: 74Mb L: 19/25 MS: 1 InsertByte- 00:10:30.277 [2024-10-09 01:47:59.720133] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:30.277 [2024-10-09 01:47:59.720159] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:30.277 [2024-10-09 01:47:59.720232] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:30.277 [2024-10-09 01:47:59.720247] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:30.277 [2024-10-09 01:47:59.720300] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:30.277 [2024-10-09 01:47:59.720316] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:30.277 [2024-10-09 01:47:59.720371] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:10:30.277 [2024-10-09 01:47:59.720386] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:30.277 [2024-10-09 01:47:59.720441] nvme_qpair.c: 256:nvme_io_qpair_print_command: 
*NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:10:30.277 [2024-10-09 01:47:59.720456] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:10:30.277 NEW_FUNC[1/1]: 0x1bf30c8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:10:30.277 #16 NEW cov: 12317 ft: 14970 corp: 13/216b lim: 25 exec/s: 0 rss: 74Mb L: 25/25 MS: 1 InsertRepeatedBytes- 00:10:30.277 [2024-10-09 01:47:59.790401] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:30.277 [2024-10-09 01:47:59.790441] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:30.277 [2024-10-09 01:47:59.790510] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:30.277 [2024-10-09 01:47:59.790532] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:30.277 [2024-10-09 01:47:59.790596] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:30.277 [2024-10-09 01:47:59.790616] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:30.277 [2024-10-09 01:47:59.790680] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:10:30.277 [2024-10-09 01:47:59.790701] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:30.277 [2024-10-09 01:47:59.790765] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:10:30.277 [2024-10-09 01:47:59.790785] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:10:30.277 #17 NEW cov: 12317 ft: 15017 corp: 14/241b lim: 25 exec/s: 0 rss: 74Mb L: 25/25 MS: 1 CopyPart- 00:10:30.277 [2024-10-09 01:47:59.829944] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:30.277 [2024-10-09 01:47:59.829974] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:30.277 #18 NEW cov: 12317 ft: 15071 corp: 15/249b lim: 25 exec/s: 18 rss: 74Mb L: 8/25 MS: 1 CopyPart- 00:10:30.277 [2024-10-09 01:47:59.890355] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:30.277 [2024-10-09 01:47:59.890381] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:30.277 [2024-10-09 01:47:59.890434] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:30.277 [2024-10-09 01:47:59.890451] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:30.277 [2024-10-09 01:47:59.890505] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:30.277 [2024-10-09 01:47:59.890520] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 
sqhd:0004 p:0 m:0 dnr:1 00:10:30.277 #19 NEW cov: 12317 ft: 15126 corp: 16/267b lim: 25 exec/s: 19 rss: 74Mb L: 18/25 MS: 1 CopyPart- 00:10:30.277 [2024-10-09 01:47:59.930529] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:30.277 [2024-10-09 01:47:59.930557] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:30.277 [2024-10-09 01:47:59.930606] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:30.277 [2024-10-09 01:47:59.930621] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:30.278 [2024-10-09 01:47:59.930673] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:30.278 [2024-10-09 01:47:59.930687] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:30.278 [2024-10-09 01:47:59.930741] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:10:30.278 [2024-10-09 01:47:59.930757] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:30.537 #20 NEW cov: 12317 ft: 15155 corp: 17/287b lim: 25 exec/s: 20 rss: 74Mb L: 20/25 MS: 1 InsertRepeatedBytes- 00:10:30.537 [2024-10-09 01:47:59.990736] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:30.537 [2024-10-09 01:47:59.990763] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:30.537 [2024-10-09 01:47:59.990811] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:30.537 [2024-10-09 01:47:59.990831] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:30.537 [2024-10-09 01:47:59.990884] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:30.537 [2024-10-09 01:47:59.990915] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:30.537 [2024-10-09 01:47:59.990970] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:10:30.537 [2024-10-09 01:47:59.990984] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:30.537 #21 NEW cov: 12317 ft: 15170 corp: 18/308b lim: 25 exec/s: 21 rss: 74Mb L: 21/25 MS: 1 InsertRepeatedBytes- 00:10:30.537 [2024-10-09 01:48:00.030767] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:30.537 [2024-10-09 01:48:00.030804] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:30.537 [2024-10-09 01:48:00.030867] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:30.537 [2024-10-09 01:48:00.030883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 
sqhd:0003 p:0 m:0 dnr:1 00:10:30.537 [2024-10-09 01:48:00.030937] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:30.537 [2024-10-09 01:48:00.030952] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:30.537 #22 NEW cov: 12317 ft: 15248 corp: 19/326b lim: 25 exec/s: 22 rss: 74Mb L: 18/25 MS: 1 ChangeBinInt- 00:10:30.537 [2024-10-09 01:48:00.091202] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:30.537 [2024-10-09 01:48:00.091233] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:30.537 [2024-10-09 01:48:00.091276] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:30.537 [2024-10-09 01:48:00.091292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:30.537 [2024-10-09 01:48:00.091347] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:30.537 [2024-10-09 01:48:00.091362] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:30.537 [2024-10-09 01:48:00.091415] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:10:30.537 [2024-10-09 01:48:00.091430] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:30.537 [2024-10-09 01:48:00.091482] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:10:30.537 [2024-10-09 01:48:00.091498] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:10:30.537 #23 NEW cov: 12317 ft: 15273 corp: 20/351b lim: 25 exec/s: 23 rss: 74Mb L: 25/25 MS: 1 ChangeBinInt- 00:10:30.537 [2024-10-09 01:48:00.151350] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:30.537 [2024-10-09 01:48:00.151382] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:30.537 [2024-10-09 01:48:00.151422] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:30.537 [2024-10-09 01:48:00.151439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:30.537 [2024-10-09 01:48:00.151494] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:30.537 [2024-10-09 01:48:00.151510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:30.537 [2024-10-09 01:48:00.151565] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:10:30.537 [2024-10-09 01:48:00.151581] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:30.537 [2024-10-09 01:48:00.151635] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: 
RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:10:30.537 [2024-10-09 01:48:00.151649] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:10:30.537 #24 NEW cov: 12317 ft: 15288 corp: 21/376b lim: 25 exec/s: 24 rss: 74Mb L: 25/25 MS: 1 ChangeByte- 00:10:30.796 [2024-10-09 01:48:00.211268] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:30.796 [2024-10-09 01:48:00.211297] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:30.796 [2024-10-09 01:48:00.211336] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:30.796 [2024-10-09 01:48:00.211350] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:30.796 [2024-10-09 01:48:00.211403] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:30.796 [2024-10-09 01:48:00.211418] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:30.796 #25 NEW cov: 12317 ft: 15338 corp: 22/395b lim: 25 exec/s: 25 rss: 74Mb L: 19/25 MS: 1 CrossOver- 00:10:30.796 [2024-10-09 01:48:00.251577] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:30.796 [2024-10-09 01:48:00.251606] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:30.796 [2024-10-09 01:48:00.251654] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:30.796 [2024-10-09 01:48:00.251670] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:30.796 [2024-10-09 01:48:00.251723] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:30.796 [2024-10-09 01:48:00.251740] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:30.796 [2024-10-09 01:48:00.251792] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:10:30.796 [2024-10-09 01:48:00.251808] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:30.796 [2024-10-09 01:48:00.251867] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:10:30.796 [2024-10-09 01:48:00.251883] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:10:30.796 #26 NEW cov: 12317 ft: 15401 corp: 23/420b lim: 25 exec/s: 26 rss: 75Mb L: 25/25 MS: 1 CopyPart- 00:10:30.796 [2024-10-09 01:48:00.311657] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:30.796 [2024-10-09 01:48:00.311685] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:30.796 [2024-10-09 01:48:00.311732] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT 
(0e) sqid:1 cid:1 nsid:0 00:10:30.796 [2024-10-09 01:48:00.311747] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:30.796 [2024-10-09 01:48:00.311800] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:30.796 [2024-10-09 01:48:00.311820] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:30.796 [2024-10-09 01:48:00.311876] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:10:30.796 [2024-10-09 01:48:00.311892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:30.796 #27 NEW cov: 12317 ft: 15410 corp: 24/440b lim: 25 exec/s: 27 rss: 75Mb L: 20/25 MS: 1 InsertRepeatedBytes- 00:10:30.796 [2024-10-09 01:48:00.351913] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:30.796 [2024-10-09 01:48:00.351941] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:30.796 [2024-10-09 01:48:00.351989] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:30.796 [2024-10-09 01:48:00.352004] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:30.796 [2024-10-09 01:48:00.352058] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:30.796 [2024-10-09 01:48:00.352073] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:30.796 [2024-10-09 01:48:00.352126] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:10:30.796 [2024-10-09 01:48:00.352142] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:30.796 [2024-10-09 01:48:00.352195] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:10:30.796 [2024-10-09 01:48:00.352211] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:10:30.796 #28 NEW cov: 12317 ft: 15416 corp: 25/465b lim: 25 exec/s: 28 rss: 75Mb L: 25/25 MS: 1 CopyPart- 00:10:30.796 [2024-10-09 01:48:00.391705] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:30.796 [2024-10-09 01:48:00.391733] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:30.796 [2024-10-09 01:48:00.391773] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:30.796 [2024-10-09 01:48:00.391789] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:30.796 [2024-10-09 01:48:00.391845] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:30.796 [2024-10-09 01:48:00.391861] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: 
INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:30.796 #29 NEW cov: 12317 ft: 15465 corp: 26/483b lim: 25 exec/s: 29 rss: 75Mb L: 18/25 MS: 1 ChangeByte- 00:10:30.796 [2024-10-09 01:48:00.431606] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:30.796 [2024-10-09 01:48:00.431634] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:30.796 #30 NEW cov: 12317 ft: 15489 corp: 27/491b lim: 25 exec/s: 30 rss: 75Mb L: 8/25 MS: 1 ChangeBit- 00:10:31.055 [2024-10-09 01:48:00.472223] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:31.055 [2024-10-09 01:48:00.472251] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:31.055 [2024-10-09 01:48:00.472303] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:31.055 [2024-10-09 01:48:00.472319] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:31.055 [2024-10-09 01:48:00.472374] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:31.055 [2024-10-09 01:48:00.472391] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:31.055 [2024-10-09 01:48:00.472442] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:10:31.055 [2024-10-09 01:48:00.472458] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:31.055 [2024-10-09 01:48:00.472513] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:10:31.055 [2024-10-09 01:48:00.472529] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:10:31.055 #31 NEW cov: 12317 ft: 15550 corp: 28/516b lim: 25 exec/s: 31 rss: 75Mb L: 25/25 MS: 1 CopyPart- 00:10:31.055 [2024-10-09 01:48:00.532346] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:31.055 [2024-10-09 01:48:00.532375] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:31.055 [2024-10-09 01:48:00.532424] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:31.055 [2024-10-09 01:48:00.532439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:31.056 [2024-10-09 01:48:00.532494] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:31.056 [2024-10-09 01:48:00.532510] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:31.056 [2024-10-09 01:48:00.532564] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:10:31.056 [2024-10-09 01:48:00.532580] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:31.056 [2024-10-09 01:48:00.532637] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:10:31.056 [2024-10-09 01:48:00.532653] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:10:31.056 #32 NEW cov: 12317 ft: 15557 corp: 29/541b lim: 25 exec/s: 32 rss: 75Mb L: 25/25 MS: 1 ChangeBit- 00:10:31.056 [2024-10-09 01:48:00.592374] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:31.056 [2024-10-09 01:48:00.592403] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:31.056 [2024-10-09 01:48:00.592469] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:31.056 [2024-10-09 01:48:00.592485] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:31.056 [2024-10-09 01:48:00.592539] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:31.056 [2024-10-09 01:48:00.592555] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:31.056 [2024-10-09 01:48:00.592608] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:10:31.056 [2024-10-09 01:48:00.592624] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:31.056 #33 NEW cov: 12317 ft: 15570 corp: 30/562b lim: 25 exec/s: 33 rss: 75Mb L: 21/25 MS: 1 InsertRepeatedBytes- 00:10:31.056 [2024-10-09 01:48:00.632609] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:31.056 [2024-10-09 01:48:00.632636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:31.056 [2024-10-09 01:48:00.632694] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:31.056 [2024-10-09 01:48:00.632710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:31.056 [2024-10-09 01:48:00.632761] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:31.056 [2024-10-09 01:48:00.632777] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:31.056 [2024-10-09 01:48:00.632833] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:10:31.056 [2024-10-09 01:48:00.632848] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:31.056 [2024-10-09 01:48:00.632904] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:10:31.056 [2024-10-09 01:48:00.632919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:10:31.056 #34 NEW cov: 12317 ft: 15582 corp: 31/587b lim: 25 
exec/s: 34 rss: 75Mb L: 25/25 MS: 1 ChangeBinInt- 00:10:31.056 [2024-10-09 01:48:00.672554] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:31.056 [2024-10-09 01:48:00.672579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:31.056 [2024-10-09 01:48:00.672643] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:31.056 [2024-10-09 01:48:00.672658] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:31.056 [2024-10-09 01:48:00.672712] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:31.056 [2024-10-09 01:48:00.672728] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:31.056 #35 NEW cov: 12317 ft: 15611 corp: 32/603b lim: 25 exec/s: 35 rss: 75Mb L: 16/25 MS: 1 ChangeBit- 00:10:31.056 [2024-10-09 01:48:00.712652] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:31.056 [2024-10-09 01:48:00.712677] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:31.056 [2024-10-09 01:48:00.712753] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:31.056 [2024-10-09 01:48:00.712767] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:31.056 [2024-10-09 01:48:00.712821] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:31.056 [2024-10-09 01:48:00.712852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:31.315 #36 NEW cov: 12317 ft: 15622 corp: 33/620b lim: 25 exec/s: 36 rss: 75Mb L: 17/25 MS: 1 EraseBytes- 00:10:31.315 [2024-10-09 01:48:00.772930] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:31.315 [2024-10-09 01:48:00.772958] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:31.315 [2024-10-09 01:48:00.773021] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:31.315 [2024-10-09 01:48:00.773036] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:31.315 [2024-10-09 01:48:00.773088] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:31.315 [2024-10-09 01:48:00.773103] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:31.315 [2024-10-09 01:48:00.773157] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:10:31.315 [2024-10-09 01:48:00.773172] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:31.315 #37 NEW cov: 12317 ft: 15629 corp: 34/642b lim: 25 exec/s: 37 rss: 
75Mb L: 22/25 MS: 1 InsertRepeatedBytes- 00:10:31.315 [2024-10-09 01:48:00.833173] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:0 nsid:0 00:10:31.315 [2024-10-09 01:48:00.833202] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:31.315 [2024-10-09 01:48:00.833256] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:1 nsid:0 00:10:31.315 [2024-10-09 01:48:00.833274] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:31.315 [2024-10-09 01:48:00.833326] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:2 nsid:0 00:10:31.316 [2024-10-09 01:48:00.833342] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:31.316 [2024-10-09 01:48:00.833396] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:3 nsid:0 00:10:31.316 [2024-10-09 01:48:00.833411] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:31.316 [2024-10-09 01:48:00.833465] nvme_qpair.c: 256:nvme_io_qpair_print_command: *NOTICE*: RESERVATION REPORT (0e) sqid:1 cid:4 nsid:0 00:10:31.316 [2024-10-09 01:48:00.833480] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:10:31.316 #38 NEW cov: 12317 ft: 15639 corp: 35/667b lim: 25 exec/s: 19 rss: 75Mb L: 25/25 MS: 1 ChangeASCIIInt- 00:10:31.316 #38 DONE cov: 12317 ft: 15639 corp: 35/667b lim: 25 exec/s: 19 rss: 75Mb 00:10:31.316 ###### Recommended dictionary. ###### 00:10:31.316 "\207\374%;.$'\000" # Uses: 2 00:10:31.316 "\377\377\377\377\377\377\377\017" # Uses: 1 00:10:31.316 ###### End of recommended dictionary. 
###### 00:10:31.316 Done 38 runs in 2 second(s) 00:10:31.316 01:48:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_23.conf /var/tmp/suppress_nvmf_fuzz 00:10:31.316 01:48:00 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:31.316 01:48:00 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:31.316 01:48:00 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 24 1 0x1 00:10:31.316 01:48:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@23 -- # local fuzzer_type=24 00:10:31.316 01:48:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@24 -- # local timen=1 00:10:31.316 01:48:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@25 -- # local core=0x1 00:10:31.316 01:48:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@26 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:10:31.316 01:48:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@27 -- # local nvmf_cfg=/tmp/fuzz_json_24.conf 00:10:31.316 01:48:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@28 -- # local suppress_file=/var/tmp/suppress_nvmf_fuzz 00:10:31.316 01:48:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@32 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_nvmf_fuzz:print_suppressions=0 00:10:31.316 01:48:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # printf %02d 24 00:10:31.316 01:48:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@34 -- # port=4424 00:10:31.316 01:48:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@35 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:10:31.574 01:48:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@37 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' 00:10:31.574 01:48:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@38 -- # sed -e 's/"trsvcid": "4420"/"trsvcid": "4424"/' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/nvmf/fuzz_json.conf 00:10:31.574 01:48:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@41 -- # echo leak:spdk_nvmf_qpair_disconnect 00:10:31.574 01:48:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@42 -- # echo leak:nvmf_ctrlr_create 00:10:31.574 01:48:00 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@45 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz -m 0x1 -s 512 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:127.0.0.1 trsvcid:4424' -c /tmp/fuzz_json_24.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 -Z 24 00:10:31.574 [2024-10-09 01:48:01.018688] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 
00:10:31.574 [2024-10-09 01:48:01.018755] [ DPDK EAL parameters: nvme_fuzz --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4048581 ] 00:10:31.574 [2024-10-09 01:48:01.218894] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.833 [2024-10-09 01:48:01.260510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.833 [2024-10-09 01:48:01.319920] tcp.c: 754:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:31.833 [2024-10-09 01:48:01.336132] tcp.c:1098:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4424 *** 00:10:31.833 INFO: Running with entropic power schedule (0xFF, 100). 00:10:31.833 INFO: Seed: 3219568898 00:10:31.833 INFO: Loaded 1 modules (383850 inline 8-bit counters): 383850 [0x2be0a0c, 0x2c3e576), 00:10:31.833 INFO: Loaded 1 PC tables (383850 PCs): 383850 [0x2c3e578,0x3219c18), 00:10:31.833 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_nvmf_24 00:10:31.833 INFO: A corpus is not provided, starting from an empty corpus 00:10:31.833 #2 INITED exec/s: 0 rss: 67Mb 00:10:31.833 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:10:31.833 This may also happen if the target rejected all inputs we tried so far 00:10:31.833 [2024-10-09 01:48:01.381770] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:14251014049101104581 len:50630 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:31.833 [2024-10-09 01:48:01.381805] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:31.833 [2024-10-09 01:48:01.381859] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:14251014049101104581 len:50630 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:31.833 [2024-10-09 01:48:01.381877] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:31.833 [2024-10-09 01:48:01.381932] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:14251014049101104581 len:50630 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:31.833 [2024-10-09 01:48:01.381948] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:32.092 NEW_FUNC[1/716]: 0x467728 in fuzz_nvm_compare_command /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:685 00:10:32.092 NEW_FUNC[2/716]: 0x4783a8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_nvme_fuzz/llvm_nvme_fuzz.c:780 00:10:32.092 #9 NEW cov: 12163 ft: 12161 corp: 2/80b lim: 100 exec/s: 0 rss: 74Mb L: 79/79 MS: 2 ChangeBit-InsertRepeatedBytes- 00:10:32.092 [2024-10-09 01:48:01.722720] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9765923333140350855 len:34696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:32.092 [2024-10-09 01:48:01.722764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:32.092 [2024-10-09 01:48:01.722822] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 
lba:9765923333140350855 len:34696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:32.092 [2024-10-09 01:48:01.722854] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:32.092 [2024-10-09 01:48:01.722909] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:9765923333140350855 len:34696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:32.092 [2024-10-09 01:48:01.722924] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:32.092 [2024-10-09 01:48:01.722977] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:9765923333140350855 len:34696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:32.092 [2024-10-09 01:48:01.722992] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:32.092 #12 NEW cov: 12276 ft: 13180 corp: 3/179b lim: 100 exec/s: 0 rss: 74Mb L: 99/99 MS: 3 ChangeByte-ChangeByte-InsertRepeatedBytes- 00:10:32.350 [2024-10-09 01:48:01.762266] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9765923333140350855 len:34696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:32.350 [2024-10-09 01:48:01.762296] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:32.350 #13 NEW cov: 12282 ft: 14263 corp: 4/210b lim: 100 exec/s: 0 rss: 74Mb L: 31/99 MS: 1 CrossOver- 00:10:32.350 [2024-10-09 01:48:01.822832] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:14236095875335439813 len:50630 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:32.350 [2024-10-09 01:48:01.822858] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:32.350 [2024-10-09 01:48:01.822895] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:14251014049101104581 len:50630 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:32.350 [2024-10-09 01:48:01.822919] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:32.350 [2024-10-09 01:48:01.822938] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:14251014049101104581 len:50630 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:32.350 [2024-10-09 01:48:01.822949] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:32.350 [2024-10-09 01:48:01.822966] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:14251014049101104581 len:50630 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:32.350 [2024-10-09 01:48:01.822977] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:32.350 #14 NEW cov: 12377 ft: 14564 corp: 5/290b lim: 100 exec/s: 0 rss: 74Mb L: 80/99 MS: 1 InsertByte- 00:10:32.350 [2024-10-09 01:48:01.882903] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:168427520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:32.350 [2024-10-09 01:48:01.882930] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT 
(00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:32.350 [2024-10-09 01:48:01.882966] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:32.350 [2024-10-09 01:48:01.882981] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:32.350 [2024-10-09 01:48:01.883033] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:32.350 [2024-10-09 01:48:01.883048] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:32.350 #19 NEW cov: 12377 ft: 14725 corp: 6/365b lim: 100 exec/s: 0 rss: 74Mb L: 75/99 MS: 5 CopyPart-CopyPart-ShuffleBytes-EraseBytes-InsertRepeatedBytes- 00:10:32.350 [2024-10-09 01:48:01.922737] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9765923333140350855 len:34696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:32.350 [2024-10-09 01:48:01.922764] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:32.350 #20 NEW cov: 12377 ft: 14781 corp: 7/396b lim: 100 exec/s: 0 rss: 74Mb L: 31/99 MS: 1 CopyPart- 00:10:32.350 [2024-10-09 01:48:01.982858] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9765923333140350855 len:34696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:32.350 [2024-10-09 01:48:01.982892] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:32.609 #21 NEW cov: 12377 ft: 14836 corp: 8/427b lim: 100 exec/s: 0 rss: 75Mb L: 31/99 MS: 1 ChangeBit- 00:10:32.609 [2024-10-09 01:48:02.043056] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9765923333140350855 len:34696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:32.609 [2024-10-09 01:48:02.043087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:32.609 #22 NEW cov: 12377 ft: 14890 corp: 9/458b lim: 100 exec/s: 0 rss: 75Mb L: 31/99 MS: 1 ChangeBit- 00:10:32.609 [2024-10-09 01:48:02.083139] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9765810083442689927 len:34696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:32.609 [2024-10-09 01:48:02.083166] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:32.609 #23 NEW cov: 12377 ft: 14954 corp: 10/490b lim: 100 exec/s: 0 rss: 75Mb L: 32/99 MS: 1 InsertByte- 00:10:32.609 [2024-10-09 01:48:02.123695] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9765923333140350855 len:34696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:32.609 [2024-10-09 01:48:02.123723] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:32.609 [2024-10-09 01:48:02.123769] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9765923333140350855 len:34696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:32.609 [2024-10-09 01:48:02.123784] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR 
FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:32.609 [2024-10-09 01:48:02.123838] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:9765923333140350855 len:34696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:32.609 [2024-10-09 01:48:02.123853] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:32.609 [2024-10-09 01:48:02.123906] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:9765923333140350855 len:34696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:32.609 [2024-10-09 01:48:02.123920] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:32.609 #24 NEW cov: 12377 ft: 15057 corp: 11/589b lim: 100 exec/s: 0 rss: 75Mb L: 99/99 MS: 1 ShuffleBytes- 00:10:32.609 [2024-10-09 01:48:02.163979] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9765923333140350855 len:34696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:32.609 [2024-10-09 01:48:02.164007] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:32.609 [2024-10-09 01:48:02.164071] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744071696285695 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:32.609 [2024-10-09 01:48:02.164087] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:32.609 [2024-10-09 01:48:02.164138] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:32.609 [2024-10-09 01:48:02.164154] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:32.609 [2024-10-09 01:48:02.164204] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:32.609 [2024-10-09 01:48:02.164218] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:32.609 [2024-10-09 01:48:02.164270] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:4 nsid:0 lba:18446744073709551615 len:34696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:32.609 [2024-10-09 01:48:02.164285] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:1 00:10:32.609 #25 NEW cov: 12377 ft: 15122 corp: 12/689b lim: 100 exec/s: 0 rss: 75Mb L: 100/100 MS: 1 InsertRepeatedBytes- 00:10:32.609 [2024-10-09 01:48:02.223854] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9765923333140350855 len:34696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:32.609 [2024-10-09 01:48:02.223881] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:32.609 [2024-10-09 01:48:02.223928] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9765923333140350855 len:34696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:32.609 [2024-10-09 01:48:02.223943] nvme_qpair.c: 
477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:32.609 [2024-10-09 01:48:02.223994] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:9765923333140350855 len:34696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:32.609 [2024-10-09 01:48:02.224010] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:32.609 NEW_FUNC[1/1]: 0x1bf30c8 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:10:32.609 #26 NEW cov: 12400 ft: 15199 corp: 13/756b lim: 100 exec/s: 0 rss: 75Mb L: 67/100 MS: 1 EraseBytes- 00:10:32.867 [2024-10-09 01:48:02.284198] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9765923333140350855 len:34696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:32.868 [2024-10-09 01:48:02.284225] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:32.868 [2024-10-09 01:48:02.284290] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9765923333140350855 len:34696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:32.868 [2024-10-09 01:48:02.284306] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:32.868 [2024-10-09 01:48:02.284357] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:9765923333140350855 len:34696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:32.868 [2024-10-09 01:48:02.284372] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:32.868 [2024-10-09 01:48:02.284422] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:9765923333140350855 len:34696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:32.868 [2024-10-09 01:48:02.284437] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:32.868 #27 NEW cov: 12400 ft: 15218 corp: 14/855b lim: 100 exec/s: 0 rss: 75Mb L: 99/100 MS: 1 CopyPart- 00:10:32.868 [2024-10-09 01:48:02.323969] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9765923333140350855 len:34696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:32.868 [2024-10-09 01:48:02.323995] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:32.868 [2024-10-09 01:48:02.324048] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9765923333140350855 len:34696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:32.868 [2024-10-09 01:48:02.324064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:32.868 #28 NEW cov: 12400 ft: 15512 corp: 15/913b lim: 100 exec/s: 0 rss: 75Mb L: 58/100 MS: 1 EraseBytes- 00:10:32.868 [2024-10-09 01:48:02.383987] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9765810083442689927 len:34696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:32.868 [2024-10-09 01:48:02.384014] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 
cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:32.868 #29 NEW cov: 12400 ft: 15571 corp: 16/945b lim: 100 exec/s: 29 rss: 75Mb L: 32/100 MS: 1 ShuffleBytes- 00:10:32.868 [2024-10-09 01:48:02.444475] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9765923333140350855 len:34696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:32.868 [2024-10-09 01:48:02.444501] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:32.868 [2024-10-09 01:48:02.444552] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9765923333140350855 len:136 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:32.868 [2024-10-09 01:48:02.444567] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:32.868 [2024-10-09 01:48:02.444621] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:9765923333140350855 len:34696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:32.868 [2024-10-09 01:48:02.444636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:32.868 #30 NEW cov: 12400 ft: 15587 corp: 17/1012b lim: 100 exec/s: 30 rss: 75Mb L: 67/100 MS: 1 ChangeBinInt- 00:10:32.868 [2024-10-09 01:48:02.504352] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9765923331035006855 len:34696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:32.868 [2024-10-09 01:48:02.504380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:32.868 #33 NEW cov: 12400 ft: 15597 corp: 18/1045b lim: 100 exec/s: 33 rss: 75Mb L: 33/100 MS: 3 CopyPart-ShuffleBytes-CrossOver- 00:10:33.126 [2024-10-09 01:48:02.544713] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:168427520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:33.126 [2024-10-09 01:48:02.544742] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:33.126 [2024-10-09 01:48:02.544779] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:33.126 [2024-10-09 01:48:02.544794] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:33.126 [2024-10-09 01:48:02.544850] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:33.127 [2024-10-09 01:48:02.544865] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:33.127 #34 NEW cov: 12400 ft: 15612 corp: 19/1120b lim: 100 exec/s: 34 rss: 75Mb L: 75/100 MS: 1 ChangeBit- 00:10:33.127 [2024-10-09 01:48:02.604608] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9765923331035006855 len:34696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:33.127 [2024-10-09 01:48:02.604636] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:33.127 #35 NEW cov: 12400 ft: 15644 corp: 20/1153b lim: 100 exec/s: 35 rss: 75Mb L: 33/100 MS: 
1 CopyPart- 00:10:33.127 [2024-10-09 01:48:02.665242] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9765923333140350855 len:34696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:33.127 [2024-10-09 01:48:02.665270] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:33.127 [2024-10-09 01:48:02.665318] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9765923333140350855 len:34696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:33.127 [2024-10-09 01:48:02.665334] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:33.127 [2024-10-09 01:48:02.665387] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:9765923333140350855 len:34696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:33.127 [2024-10-09 01:48:02.665402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:33.127 [2024-10-09 01:48:02.665457] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:9765923333140350855 len:34696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:33.127 [2024-10-09 01:48:02.665472] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:33.127 #36 NEW cov: 12400 ft: 15649 corp: 21/1240b lim: 100 exec/s: 36 rss: 75Mb L: 87/100 MS: 1 CopyPart- 00:10:33.127 [2024-10-09 01:48:02.705350] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:14236095875335439815 len:50630 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:33.127 [2024-10-09 01:48:02.705378] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:33.127 [2024-10-09 01:48:02.705423] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:14251014049101104581 len:50630 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:33.127 [2024-10-09 01:48:02.705439] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:33.127 [2024-10-09 01:48:02.705491] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:14251014049101104581 len:50630 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:33.127 [2024-10-09 01:48:02.705507] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:33.127 [2024-10-09 01:48:02.705558] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:14251014049101104581 len:50630 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:33.127 [2024-10-09 01:48:02.705573] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:33.127 #37 NEW cov: 12400 ft: 15664 corp: 22/1320b lim: 100 exec/s: 37 rss: 75Mb L: 80/100 MS: 1 ChangeBit- 00:10:33.127 [2024-10-09 01:48:02.765121] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9764499465582380935 len:34696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:33.127 [2024-10-09 01:48:02.765149] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID 
NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:33.389 #38 NEW cov: 12400 ft: 15674 corp: 23/1351b lim: 100 exec/s: 38 rss: 75Mb L: 31/100 MS: 1 ChangeBinInt- 00:10:33.389 [2024-10-09 01:48:02.825208] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9765923333140350855 len:30841 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:33.389 [2024-10-09 01:48:02.825235] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:33.389 #39 NEW cov: 12400 ft: 15720 corp: 24/1382b lim: 100 exec/s: 39 rss: 75Mb L: 31/100 MS: 1 ChangeBinInt- 00:10:33.389 [2024-10-09 01:48:02.865347] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9764499465582380935 len:34696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:33.389 [2024-10-09 01:48:02.865373] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:33.389 #40 NEW cov: 12400 ft: 15721 corp: 25/1413b lim: 100 exec/s: 40 rss: 75Mb L: 31/100 MS: 1 CMP- DE: "\000\000\000\000"- 00:10:33.389 [2024-10-09 01:48:02.925485] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9765810083442689927 len:34696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:33.389 [2024-10-09 01:48:02.925512] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:33.389 #41 NEW cov: 12400 ft: 15729 corp: 26/1446b lim: 100 exec/s: 41 rss: 76Mb L: 33/100 MS: 1 InsertByte- 00:10:33.389 [2024-10-09 01:48:02.985681] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9765923331035006855 len:34696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:33.389 [2024-10-09 01:48:02.985710] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:33.389 #42 NEW cov: 12400 ft: 15745 corp: 27/1479b lim: 100 exec/s: 42 rss: 76Mb L: 33/100 MS: 1 ChangeBit- 00:10:33.389 [2024-10-09 01:48:03.045833] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9765923333140350855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:33.389 [2024-10-09 01:48:03.045859] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:33.651 #43 NEW cov: 12400 ft: 15746 corp: 28/1510b lim: 100 exec/s: 43 rss: 76Mb L: 31/100 MS: 1 PersAutoDict- DE: "\000\000\000\000"- 00:10:33.651 [2024-10-09 01:48:03.086347] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9765923333140350855 len:34696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:33.651 [2024-10-09 01:48:03.086374] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:33.651 [2024-10-09 01:48:03.086439] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:9736931410539153287 len:34696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:33.651 [2024-10-09 01:48:03.086455] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:33.651 [2024-10-09 01:48:03.086507] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 
lba:9765923333143955335 len:34696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:33.651 [2024-10-09 01:48:03.086522] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:33.651 [2024-10-09 01:48:03.086576] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:3 nsid:0 lba:9765923333140350855 len:34696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:33.651 [2024-10-09 01:48:03.086591] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:3 cdw0:0 sqhd:0005 p:0 m:0 dnr:1 00:10:33.651 #44 NEW cov: 12400 ft: 15797 corp: 29/1609b lim: 100 exec/s: 44 rss: 76Mb L: 99/100 MS: 1 CrossOver- 00:10:33.651 [2024-10-09 01:48:03.126038] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9765923331035006855 len:16776 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:33.651 [2024-10-09 01:48:03.126064] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:33.651 #45 NEW cov: 12400 ft: 15853 corp: 30/1642b lim: 100 exec/s: 45 rss: 76Mb L: 33/100 MS: 1 ChangeByte- 00:10:33.651 [2024-10-09 01:48:03.166480] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:168427520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:33.651 [2024-10-09 01:48:03.166509] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:33.651 [2024-10-09 01:48:03.166562] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:33.651 [2024-10-09 01:48:03.166579] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:33.651 [2024-10-09 01:48:03.166632] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:33.651 [2024-10-09 01:48:03.166648] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:33.651 #46 NEW cov: 12400 ft: 15860 corp: 31/1718b lim: 100 exec/s: 46 rss: 76Mb L: 76/100 MS: 1 InsertByte- 00:10:33.652 [2024-10-09 01:48:03.226375] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9764499465582380935 len:34696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:33.652 [2024-10-09 01:48:03.226402] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:33.652 [2024-10-09 01:48:03.266743] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:9764499465582380935 len:34696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:33.652 [2024-10-09 01:48:03.266769] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:33.652 [2024-10-09 01:48:03.266835] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:16493559404525924580 len:58597 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:33.652 [2024-10-09 01:48:03.266852] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:33.652 
[2024-10-09 01:48:03.266918] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:16493347489100129508 len:34696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:33.652 [2024-10-09 01:48:03.266932] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:33.652 #48 NEW cov: 12400 ft: 15896 corp: 32/1780b lim: 100 exec/s: 48 rss: 76Mb L: 62/100 MS: 2 CMP-InsertRepeatedBytes- DE: "H\337L\2220$'\000"- 00:10:33.652 [2024-10-09 01:48:03.306847] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:168427520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:33.652 [2024-10-09 01:48:03.306874] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:33.652 [2024-10-09 01:48:03.306922] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:33.652 [2024-10-09 01:48:03.306937] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:33.652 [2024-10-09 01:48:03.306990] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:33.652 [2024-10-09 01:48:03.307006] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:33.910 #49 NEW cov: 12400 ft: 15906 corp: 33/1856b lim: 100 exec/s: 49 rss: 76Mb L: 76/100 MS: 1 InsertByte- 00:10:33.910 [2024-10-09 01:48:03.346964] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:33.910 [2024-10-09 01:48:03.346991] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:1 00:10:33.910 [2024-10-09 01:48:03.347050] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:0 lba:18446744073709551615 len:65536 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:33.910 [2024-10-09 01:48:03.347066] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:1 00:10:33.910 [2024-10-09 01:48:03.347118] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:2 nsid:0 lba:9765923335161448970 len:34696 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:10:33.910 [2024-10-09 01:48:03.347133] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID NAMESPACE OR FORMAT (00/0b) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:1 00:10:33.910 #50 NEW cov: 12400 ft: 15937 corp: 34/1931b lim: 100 exec/s: 25 rss: 76Mb L: 75/100 MS: 1 InsertRepeatedBytes- 00:10:33.910 #50 DONE cov: 12400 ft: 15937 corp: 34/1931b lim: 100 exec/s: 25 rss: 76Mb 00:10:33.910 ###### Recommended dictionary. ###### 00:10:33.910 "\000\000\000\000" # Uses: 1 00:10:33.910 "H\337L\2220$'\000" # Uses: 0 00:10:33.910 ###### End of recommended dictionary. 
###### 00:10:33.910 Done 50 runs in 2 second(s) 00:10:33.910 01:48:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@54 -- # rm -rf /tmp/fuzz_json_24.conf /var/tmp/suppress_nvmf_fuzz 00:10:33.910 01:48:03 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:33.910 01:48:03 llvm_fuzz.nvmf_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:33.911 01:48:03 llvm_fuzz.nvmf_llvm_fuzz -- nvmf/run.sh@79 -- # trap - SIGINT SIGTERM EXIT 00:10:33.911 00:10:33.911 real 1m3.987s 00:10:33.911 user 1m39.908s 00:10:33.911 sys 0m7.604s 00:10:33.911 01:48:03 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:33.911 01:48:03 llvm_fuzz.nvmf_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:33.911 ************************************ 00:10:33.911 END TEST nvmf_llvm_fuzz 00:10:33.911 ************************************ 00:10:33.911 01:48:03 llvm_fuzz -- fuzz/llvm.sh@17 -- # for fuzzer in "${fuzzers[@]}" 00:10:33.911 01:48:03 llvm_fuzz -- fuzz/llvm.sh@18 -- # case "$fuzzer" in 00:10:33.911 01:48:03 llvm_fuzz -- fuzz/llvm.sh@20 -- # run_test vfio_llvm_fuzz /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:10:33.911 01:48:03 llvm_fuzz -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:33.911 01:48:03 llvm_fuzz -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:33.911 01:48:03 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:33.911 ************************************ 00:10:33.911 START TEST vfio_llvm_fuzz 00:10:33.911 ************************************ 00:10:33.911 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/run.sh 00:10:34.172 * Looking for test storage... 
00:10:34.172 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:34.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.172 --rc genhtml_branch_coverage=1 00:10:34.172 --rc genhtml_function_coverage=1 00:10:34.172 --rc genhtml_legend=1 00:10:34.172 --rc geninfo_all_blocks=1 00:10:34.172 --rc geninfo_unexecuted_blocks=1 00:10:34.172 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:34.172 ' 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:34.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.172 --rc genhtml_branch_coverage=1 00:10:34.172 --rc genhtml_function_coverage=1 00:10:34.172 --rc genhtml_legend=1 00:10:34.172 --rc geninfo_all_blocks=1 00:10:34.172 --rc geninfo_unexecuted_blocks=1 00:10:34.172 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:34.172 ' 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:34.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.172 --rc genhtml_branch_coverage=1 00:10:34.172 --rc genhtml_function_coverage=1 00:10:34.172 --rc genhtml_legend=1 00:10:34.172 --rc geninfo_all_blocks=1 00:10:34.172 --rc geninfo_unexecuted_blocks=1 00:10:34.172 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:34.172 ' 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:34.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.172 --rc genhtml_branch_coverage=1 00:10:34.172 --rc genhtml_function_coverage=1 00:10:34.172 --rc genhtml_legend=1 00:10:34.172 --rc geninfo_all_blocks=1 00:10:34.172 --rc geninfo_unexecuted_blocks=1 00:10:34.172 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:34.172 ' 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@64 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/setup/common.sh 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz 
-- setup/common.sh@6 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/autotest_common.sh 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@34 -- # set -e 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output ']' 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/build_config.sh 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@22 -- # CONFIG_CET=n 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@24 -- 
# CONFIG_OCF_PATH= 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB=/usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:10:34.172 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@36 -- # CONFIG_FUZZER=y 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@58 -- # 
CONFIG_HAVE_EVP_MAC=y 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@71 -- # CONFIG_SHARED=n 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@75 -- # CONFIG_FC=n 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/build_config.sh@89 -- # CONFIG_URING=n 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common/applications.sh 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # 
readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/common 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/config.h ]] 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:34.173 #define SPDK_CONFIG_H 00:10:34.173 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:34.173 #define SPDK_CONFIG_APPS 1 00:10:34.173 #define SPDK_CONFIG_ARCH native 00:10:34.173 #undef SPDK_CONFIG_ASAN 00:10:34.173 #undef SPDK_CONFIG_AVAHI 00:10:34.173 #undef SPDK_CONFIG_CET 00:10:34.173 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:34.173 #define SPDK_CONFIG_COVERAGE 1 00:10:34.173 #define SPDK_CONFIG_CROSS_PREFIX 00:10:34.173 #undef SPDK_CONFIG_CRYPTO 00:10:34.173 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:34.173 #undef SPDK_CONFIG_CUSTOMOCF 00:10:34.173 #undef SPDK_CONFIG_DAOS 00:10:34.173 #define SPDK_CONFIG_DAOS_DIR 00:10:34.173 #define SPDK_CONFIG_DEBUG 1 00:10:34.173 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:34.173 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build 00:10:34.173 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:34.173 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:34.173 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:34.173 #undef SPDK_CONFIG_DPDK_UADK 00:10:34.173 #define SPDK_CONFIG_ENV /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/env_dpdk 00:10:34.173 #define SPDK_CONFIG_EXAMPLES 1 00:10:34.173 #undef SPDK_CONFIG_FC 00:10:34.173 #define SPDK_CONFIG_FC_PATH 00:10:34.173 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:34.173 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:34.173 #define SPDK_CONFIG_FSDEV 1 00:10:34.173 #undef SPDK_CONFIG_FUSE 00:10:34.173 #define SPDK_CONFIG_FUZZER 1 00:10:34.173 #define SPDK_CONFIG_FUZZER_LIB /usr/lib/clang/17/lib/x86_64-redhat-linux-gnu/libclang_rt.fuzzer_no_main.a 00:10:34.173 #undef SPDK_CONFIG_GOLANG 00:10:34.173 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:34.173 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:34.173 #define 
SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:34.173 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:34.173 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:34.173 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:34.173 #undef SPDK_CONFIG_HAVE_LZ4 00:10:34.173 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:34.173 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:34.173 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:34.173 #define SPDK_CONFIG_IDXD 1 00:10:34.173 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:34.173 #undef SPDK_CONFIG_IPSEC_MB 00:10:34.173 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:34.173 #define SPDK_CONFIG_ISAL 1 00:10:34.173 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:34.173 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:34.173 #define SPDK_CONFIG_LIBDIR 00:10:34.173 #undef SPDK_CONFIG_LTO 00:10:34.173 #define SPDK_CONFIG_MAX_LCORES 128 00:10:34.173 #define SPDK_CONFIG_NVME_CUSE 1 00:10:34.173 #undef SPDK_CONFIG_OCF 00:10:34.173 #define SPDK_CONFIG_OCF_PATH 00:10:34.173 #define SPDK_CONFIG_OPENSSL_PATH 00:10:34.173 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:34.173 #define SPDK_CONFIG_PGO_DIR 00:10:34.173 #undef SPDK_CONFIG_PGO_USE 00:10:34.173 #define SPDK_CONFIG_PREFIX /usr/local 00:10:34.173 #undef SPDK_CONFIG_RAID5F 00:10:34.173 #undef SPDK_CONFIG_RBD 00:10:34.173 #define SPDK_CONFIG_RDMA 1 00:10:34.173 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:34.173 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:34.173 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:34.173 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:34.173 #undef SPDK_CONFIG_SHARED 00:10:34.173 #undef SPDK_CONFIG_SMA 00:10:34.173 #define SPDK_CONFIG_TESTS 1 00:10:34.173 #undef SPDK_CONFIG_TSAN 00:10:34.173 #define SPDK_CONFIG_UBLK 1 00:10:34.173 #define SPDK_CONFIG_UBSAN 1 00:10:34.173 #undef SPDK_CONFIG_UNIT_TESTS 00:10:34.173 #undef SPDK_CONFIG_URING 00:10:34.173 #define SPDK_CONFIG_URING_PATH 00:10:34.173 #undef SPDK_CONFIG_URING_ZNS 00:10:34.173 #undef SPDK_CONFIG_USDT 00:10:34.173 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:34.173 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:34.173 #define SPDK_CONFIG_VFIO_USER 1 00:10:34.173 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:34.173 #define SPDK_CONFIG_VHOST 1 00:10:34.173 #define SPDK_CONFIG_VIRTIO 1 00:10:34.173 #undef SPDK_CONFIG_VTUNE 00:10:34.173 #define SPDK_CONFIG_VTUNE_DIR 00:10:34.173 #define SPDK_CONFIG_WERROR 1 00:10:34.173 #define SPDK_CONFIG_WPDK_DIR 00:10:34.173 #undef SPDK_CONFIG_XNVME 00:10:34.173 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:10:34.173 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@5 -- # export PATH 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # dirname /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/common 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # readlink -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- pm/common@64 -- # TEST_TAG=N/A 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/.run_test_name 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- pm/common@68 -- # uname -s 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- 
pm/common@68 -- # PM_OS=Linux 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[0]= 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power ]] 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@58 -- # : 0 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@62 -- # : 0 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@64 -- # : 0 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@66 -- # : 1 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@68 -- # : 0 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@70 -- # : 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@72 -- # : 0 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@74 -- # : 0 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@76 -- # : 0 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- 
common/autotest_common.sh@78 -- # : 0 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@80 -- # : 0 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@82 -- # : 0 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@84 -- # : 0 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@86 -- # : 0 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@88 -- # : 0 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@90 -- # : 0 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@92 -- # : 0 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@94 -- # : 0 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@96 -- # : 0 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@98 -- # : 1 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@100 -- # : 1 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@102 -- # : rdma 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@104 -- # : 0 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@106 -- # : 0 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@108 -- # : 0 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@110 -- # : 0 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@112 -- # : 0 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:34.174 01:48:03 
llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@114 -- # : 0 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@116 -- # : 0 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@118 -- # : 0 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@120 -- # : 0 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@122 -- # : 0 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@124 -- # : 1 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@126 -- # : 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@128 -- # : 0 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@130 -- # : 0 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@132 -- # : 0 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@134 -- # : 0 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@136 -- # : 0 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@138 -- # : 0 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@140 -- # : 00:10:34.174 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@142 -- # : true 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@144 -- # : 0 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@146 -- # : 0 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@148 -- # : 0 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 
00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@150 -- # : 0 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@152 -- # : 0 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@154 -- # : 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@156 -- # : 0 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@158 -- # : 0 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@160 -- # : 0 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@162 -- # : 0 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@164 -- # : 0 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@166 -- # : 0 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@169 -- # : 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@171 -- # : 0 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@173 -- # : 0 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@175 -- # : 1 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@181 -- # 
VFIO_LIB_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@182 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/python 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@193 -- # 
PYTHONDONTWRITEBYTECODE=1 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:34.175 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@204 -- # cat 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/bin 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/build/examples 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@260 -- # 
AR_TOOL=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@267 -- # _LCOV= 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # [[ 1 -eq 1 ]] 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@268 -- # _LCOV=1 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@273 -- # lcov_opt='--gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@277 -- # export valgrind= 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@277 -- # valgrind= 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@283 -- # uname -s 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@287 -- # MAKE=make 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j72 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@307 -- # TEST_MODE= 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@329 -- # [[ -z 4049028 ]] 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@329 -- # kill -0 4049028 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@342 -- # local mount target_dir 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@344 -- # local -A mounts fss sizes 
avails uses 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.xe30eV 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio /tmp/spdk.xe30eV/tests/vfio /tmp/spdk.xe30eV 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # df -T 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=722997248 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=4561432576 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=86308478976 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=94500294656 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=8191815680 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:34.435 
01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=47246716928 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=47250145280 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=3428352 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=18894159872 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=18900062208 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=5902336 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=47249555456 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=47250149376 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=593920 00:10:34.435 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # avails["$mount"]=9450016768 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@373 -- # sizes["$mount"]=9450029056 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:10:34.436 * Looking for test storage... 
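The set_test_storage trace above reads `df -T` output into a set of associative arrays keyed by mount point before it picks a filesystem with room for the fuzzer's scratch data. A minimal bash sketch of that parsing step, reconstructed from the trace; the *1024 scaling is inferred from the byte values stored above, and the real helper in autotest_common.sh also falls back to a mktemp directory when no candidate fits:

  #!/usr/bin/env bash
  # Reconstruction of the df parsing traced above (simplified).
  requested_size=2214592512            # value from the trace (2 GiB request plus slack)
  declare -A mounts fss sizes avails uses

  while read -r source fs size use avail _ mount; do
    mounts["$mount"]=$source
    fss["$mount"]=$fs
    sizes["$mount"]=$((size * 1024))   # df -T prints 1K blocks; the trace stores bytes
    uses["$mount"]=$((use * 1024))
    avails["$mount"]=$((avail * 1024))
  done < <(df -T | grep -v Filesystem)

  for mount in "${!avails[@]}"; do
    target_space=${avails[$mount]}
    if ((target_space >= requested_size)); then
      printf 'candidate with enough space: %s (%s bytes free)\n' "$mount" "$target_space"
    fi
  done

In this run the overlay root at / is the candidate that wins (target_space=86308478976 against requested_size=2214592512), which is why the log reports the test storage under spdk/test/fuzz/llvm/vfio rather than one of the tmpfs mounts.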
00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@379 -- # local target_space new_size 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@383 -- # mount=/ 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@385 -- # target_space=86308478976 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@392 -- # new_size=10406408192 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:10:34.436 * Found test storage at /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@400 -- # return 0 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1668 -- # set -o errtrace 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1672 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1673 -- # true 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1675 -- # xtrace_fd 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@27 -- # exec 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@29 -- # exec 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@16 -- # unset -v 
'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@18 -- # set -x 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@345 -- # : 1 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # decimal 1 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=1 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 1 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # decimal 2 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@353 -- # local d=2 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:34.436 01:48:03 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@355 -- # echo 2 00:10:34.436 01:48:04 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:10:34.436 01:48:04 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:34.436 01:48:04 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:34.436 01:48:04 llvm_fuzz.vfio_llvm_fuzz -- scripts/common.sh@368 -- # return 0 00:10:34.436 01:48:04 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:34.436 01:48:04 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:34.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.436 --rc genhtml_branch_coverage=1 00:10:34.436 --rc genhtml_function_coverage=1 00:10:34.436 --rc genhtml_legend=1 00:10:34.436 --rc geninfo_all_blocks=1 00:10:34.436 --rc geninfo_unexecuted_blocks=1 00:10:34.436 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:34.436 ' 00:10:34.436 01:48:04 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:34.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.436 --rc genhtml_branch_coverage=1 00:10:34.436 --rc genhtml_function_coverage=1 00:10:34.436 --rc genhtml_legend=1 00:10:34.436 --rc geninfo_all_blocks=1 00:10:34.436 --rc geninfo_unexecuted_blocks=1 00:10:34.436 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:34.436 ' 00:10:34.436 01:48:04 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:34.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.436 --rc genhtml_branch_coverage=1 00:10:34.436 --rc genhtml_function_coverage=1 00:10:34.436 --rc genhtml_legend=1 00:10:34.436 --rc geninfo_all_blocks=1 00:10:34.436 --rc geninfo_unexecuted_blocks=1 00:10:34.436 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:34.436 ' 00:10:34.436 01:48:04 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:34.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.436 --rc genhtml_branch_coverage=1 00:10:34.436 --rc genhtml_function_coverage=1 00:10:34.436 --rc genhtml_legend=1 00:10:34.436 --rc geninfo_all_blocks=1 00:10:34.436 --rc geninfo_unexecuted_blocks=1 00:10:34.436 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:10:34.436 ' 00:10:34.436 01:48:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@65 -- # source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/../common.sh 00:10:34.436 01:48:04 
llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@8 -- # pids=() 00:10:34.436 01:48:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@67 -- # fuzzfile=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:10:34.436 01:48:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # grep -c '\.fn =' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c 00:10:34.436 01:48:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@68 -- # fuzz_num=7 00:10:34.436 01:48:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@69 -- # (( fuzz_num != 0 )) 00:10:34.436 01:48:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@71 -- # trap 'cleanup /tmp/vfio-user-* /var/tmp/suppress_vfio_fuzz; exit 1' SIGINT SIGTERM EXIT 00:10:34.436 01:48:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@74 -- # mem_size=0 00:10:34.436 01:48:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@75 -- # [[ 1 -eq 1 ]] 00:10:34.436 01:48:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@76 -- # start_llvm_fuzz_short 7 1 00:10:34.436 01:48:04 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@69 -- # local fuzz_num=7 00:10:34.436 01:48:04 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@70 -- # local time=1 00:10:34.436 01:48:04 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i = 0 )) 00:10:34.436 01:48:04 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:34.436 01:48:04 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 0 1 0x1 00:10:34.436 01:48:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=0 00:10:34.436 01:48:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:10:34.436 01:48:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:10:34.436 01:48:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:10:34.436 01:48:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-0 00:10:34.436 01:48:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-0/domain/1 00:10:34.436 01:48:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-0/domain/2 00:10:34.436 01:48:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-0/fuzz_vfio_json.conf 00:10:34.436 01:48:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:10:34.436 01:48:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:10:34.436 01:48:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-0 /tmp/vfio-user-0/domain/1 /tmp/vfio-user-0/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:10:34.436 01:48:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-0/domain/1%; 00:10:34.437 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-0/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:10:34.437 01:48:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:10:34.437 01:48:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:10:34.437 01:48:04 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-0/domain/1 -c /tmp/vfio-user-0/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 -Y /tmp/vfio-user-0/domain/2 -r /tmp/vfio-user-0/spdk0.sock -Z 0 00:10:34.437 [2024-10-09 01:48:04.059423] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 00:10:34.437 [2024-10-09 01:48:04.059500] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4049195 ] 00:10:34.695 [2024-10-09 01:48:04.140924] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.695 [2024-10-09 01:48:04.190302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.954 INFO: Running with entropic power schedule (0xFF, 100). 00:10:34.954 INFO: Seed: 1949596495 00:10:34.954 INFO: Loaded 1 modules (381086 inline 8-bit counters): 381086 [0x2ba220c, 0x2bff2aa), 00:10:34.954 INFO: Loaded 1 PC tables (381086 PCs): 381086 [0x2bff2b0,0x31cfc90), 00:10:34.954 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_0 00:10:34.954 INFO: A corpus is not provided, starting from an empty corpus 00:10:34.954 #2 INITED exec/s: 0 rss: 67Mb 00:10:34.954 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:10:34.954 This may also happen if the target rejected all inputs we tried so far 00:10:34.954 [2024-10-09 01:48:04.432506] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: enabling controller 00:10:35.212 NEW_FUNC[1/670]: 0x43b5e8 in fuzz_vfio_user_region_rw /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:84 00:10:35.213 NEW_FUNC[2/670]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:10:35.213 #15 NEW cov: 11097 ft: 11087 corp: 2/7b lim: 6 exec/s: 0 rss: 74Mb L: 6/6 MS: 3 ChangeBit-CopyPart-InsertRepeatedBytes- 00:10:35.471 NEW_FUNC[1/1]: 0x12c23e8 in spdk_nvmf_request_using_zcopy /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/include/spdk/nvmf_transport.h:559 00:10:35.471 #16 NEW cov: 11139 ft: 13568 corp: 3/13b lim: 6 exec/s: 0 rss: 75Mb L: 6/6 MS: 1 ChangeBit- 00:10:35.471 #17 NEW cov: 11142 ft: 13940 corp: 4/19b lim: 6 exec/s: 0 rss: 76Mb L: 6/6 MS: 1 ShuffleBytes- 00:10:35.730 #23 NEW cov: 11142 ft: 14405 corp: 5/25b lim: 6 exec/s: 0 rss: 76Mb L: 6/6 MS: 1 ChangeByte- 00:10:35.730 NEW_FUNC[1/1]: 0x1bbf518 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:10:35.730 #24 NEW cov: 11159 ft: 14937 corp: 6/31b lim: 6 exec/s: 0 rss: 76Mb L: 6/6 MS: 1 CMP- DE: "\001\000\000\001"- 00:10:35.987 #40 NEW cov: 11159 ft: 15154 corp: 7/37b lim: 6 exec/s: 40 rss: 76Mb L: 6/6 MS: 1 CopyPart- 00:10:35.987 #41 NEW cov: 11159 ft: 15170 corp: 8/43b lim: 6 exec/s: 41 rss: 76Mb L: 6/6 MS: 1 CopyPart- 00:10:36.246 #42 NEW cov: 11159 ft: 16054 corp: 9/49b lim: 6 exec/s: 42 rss: 76Mb L: 6/6 MS: 1 ShuffleBytes- 00:10:36.246 #43 NEW cov: 11159 ft: 16271 corp: 10/55b lim: 6 exec/s: 43 rss: 76Mb L: 6/6 MS: 1 ChangeBit- 00:10:36.503 #44 NEW cov: 11159 ft: 16675 corp: 11/61b lim: 6 exec/s: 44 rss: 77Mb L: 6/6 MS: 1 ChangeByte- 00:10:36.503 #45 NEW cov: 11159 ft: 16711 corp: 12/67b lim: 6 exec/s: 45 rss: 77Mb L: 
6/6 MS: 1 PersAutoDict- DE: "\001\000\000\001"- 00:10:36.760 #46 NEW cov: 11159 ft: 16777 corp: 13/73b lim: 6 exec/s: 46 rss: 77Mb L: 6/6 MS: 1 CrossOver- 00:10:36.760 #47 NEW cov: 11166 ft: 16839 corp: 14/79b lim: 6 exec/s: 47 rss: 77Mb L: 6/6 MS: 1 ChangeBit- 00:10:36.760 #48 NEW cov: 11166 ft: 17397 corp: 15/85b lim: 6 exec/s: 24 rss: 77Mb L: 6/6 MS: 1 CopyPart- 00:10:36.760 #48 DONE cov: 11166 ft: 17397 corp: 15/85b lim: 6 exec/s: 24 rss: 77Mb 00:10:36.760 ###### Recommended dictionary. ###### 00:10:36.760 "\001\000\000\001" # Uses: 1 00:10:36.760 ###### End of recommended dictionary. ###### 00:10:36.760 Done 48 runs in 2 second(s) 00:10:36.760 [2024-10-09 01:48:06.425021] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-0/domain/2: disabling controller 00:10:37.018 01:48:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-0 /var/tmp/suppress_vfio_fuzz 00:10:37.018 01:48:06 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:37.018 01:48:06 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:37.018 01:48:06 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 1 1 0x1 00:10:37.018 01:48:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=1 00:10:37.018 01:48:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:10:37.018 01:48:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:10:37.018 01:48:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:10:37.018 01:48:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-1 00:10:37.018 01:48:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-1/domain/1 00:10:37.019 01:48:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-1/domain/2 00:10:37.019 01:48:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-1/fuzz_vfio_json.conf 00:10:37.019 01:48:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:10:37.019 01:48:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:10:37.019 01:48:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-1 /tmp/vfio-user-1/domain/1 /tmp/vfio-user-1/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:10:37.277 01:48:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-1/domain/1%; 00:10:37.277 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-1/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:10:37.277 01:48:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:10:37.277 01:48:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:10:37.277 01:48:06 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-1/domain/1 -c /tmp/vfio-user-1/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 -Y /tmp/vfio-user-1/domain/2 -r /tmp/vfio-user-1/spdk1.sock -Z 1 00:10:37.277 [2024-10-09 
01:48:06.724309] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 00:10:37.277 [2024-10-09 01:48:06.724391] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4049893 ] 00:10:37.277 [2024-10-09 01:48:06.807646] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.277 [2024-10-09 01:48:06.855248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.536 INFO: Running with entropic power schedule (0xFF, 100). 00:10:37.536 INFO: Seed: 335642128 00:10:37.536 INFO: Loaded 1 modules (381086 inline 8-bit counters): 381086 [0x2ba220c, 0x2bff2aa), 00:10:37.536 INFO: Loaded 1 PC tables (381086 PCs): 381086 [0x2bff2b0,0x31cfc90), 00:10:37.536 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_1 00:10:37.536 INFO: A corpus is not provided, starting from an empty corpus 00:10:37.536 #2 INITED exec/s: 0 rss: 67Mb 00:10:37.536 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:10:37.536 This may also happen if the target rejected all inputs we tried so far 00:10:37.536 [2024-10-09 01:48:07.111637] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: enabling controller 00:10:37.536 [2024-10-09 01:48:07.162853] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:10:37.536 [2024-10-09 01:48:07.162878] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:10:37.536 [2024-10-09 01:48:07.162910] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:10:38.054 NEW_FUNC[1/673]: 0x43bb88 in fuzz_vfio_user_version /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:71 00:10:38.054 NEW_FUNC[2/673]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:10:38.054 #6 NEW cov: 11121 ft: 11071 corp: 2/5b lim: 4 exec/s: 0 rss: 74Mb L: 4/4 MS: 4 ChangeBit-InsertByte-CopyPart-InsertByte- 00:10:38.054 [2024-10-09 01:48:07.630168] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:10:38.055 [2024-10-09 01:48:07.630205] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:10:38.055 [2024-10-09 01:48:07.630224] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:10:38.315 #12 NEW cov: 11138 ft: 14566 corp: 3/9b lim: 4 exec/s: 0 rss: 75Mb L: 4/4 MS: 1 CopyPart- 00:10:38.315 [2024-10-09 01:48:07.832980] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:10:38.315 [2024-10-09 01:48:07.833010] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:10:38.315 [2024-10-09 01:48:07.833045] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:10:38.315 NEW_FUNC[1/1]: 0x1bbf518 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:10:38.315 #33 NEW cov: 11155 ft: 15455 corp: 4/13b lim: 4 exec/s: 0 rss: 76Mb L: 4/4 MS: 1 ChangeBit- 00:10:38.574 [2024-10-09 01:48:08.032687] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:10:38.574 [2024-10-09 01:48:08.032719] 
vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:10:38.574 [2024-10-09 01:48:08.032752] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:10:38.574 #44 NEW cov: 11155 ft: 15877 corp: 5/17b lim: 4 exec/s: 44 rss: 77Mb L: 4/4 MS: 1 ChangeByte- 00:10:38.574 [2024-10-09 01:48:08.221723] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:10:38.574 [2024-10-09 01:48:08.221747] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:10:38.574 [2024-10-09 01:48:08.221764] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:10:38.832 #60 NEW cov: 11155 ft: 16738 corp: 6/21b lim: 4 exec/s: 60 rss: 77Mb L: 4/4 MS: 1 ChangeBinInt- 00:10:38.832 [2024-10-09 01:48:08.409113] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:10:38.832 [2024-10-09 01:48:08.409135] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:10:38.832 [2024-10-09 01:48:08.409168] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:10:39.090 #61 NEW cov: 11155 ft: 16961 corp: 7/25b lim: 4 exec/s: 61 rss: 77Mb L: 4/4 MS: 1 ShuffleBytes- 00:10:39.090 [2024-10-09 01:48:08.598699] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:10:39.090 [2024-10-09 01:48:08.598721] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:10:39.090 [2024-10-09 01:48:08.598754] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:10:39.090 #62 NEW cov: 11155 ft: 17285 corp: 8/29b lim: 4 exec/s: 62 rss: 77Mb L: 4/4 MS: 1 CopyPart- 00:10:39.349 [2024-10-09 01:48:08.789308] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:10:39.349 [2024-10-09 01:48:08.789330] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:10:39.349 [2024-10-09 01:48:08.789348] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:10:39.349 #63 NEW cov: 11162 ft: 17508 corp: 9/33b lim: 4 exec/s: 63 rss: 77Mb L: 4/4 MS: 1 CrossOver- 00:10:39.349 [2024-10-09 01:48:08.981176] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: bad command 1 00:10:39.349 [2024-10-09 01:48:08.981199] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-1/domain/1: msg0: cmd 1 failed: Invalid argument 00:10:39.349 [2024-10-09 01:48:08.981215] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 1 return failure 00:10:39.607 #64 pulse cov: 11162 ft: 17714 corp: 9/33b lim: 4 exec/s: 32 rss: 77Mb 00:10:39.607 #64 NEW cov: 11162 ft: 17714 corp: 10/37b lim: 4 exec/s: 32 rss: 77Mb L: 4/4 MS: 1 ChangeBit- 00:10:39.607 #64 DONE cov: 11162 ft: 17714 corp: 10/37b lim: 4 exec/s: 32 rss: 77Mb 00:10:39.607 Done 64 runs in 2 second(s) 00:10:39.607 [2024-10-09 01:48:09.119013] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-1/domain/2: disabling controller 00:10:39.865 01:48:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-1 /var/tmp/suppress_vfio_fuzz 00:10:39.865 01:48:09 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:39.865 01:48:09 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:39.866 01:48:09 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 2 1 0x1 00:10:39.866 01:48:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- 
# local fuzzer_type=2 00:10:39.866 01:48:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:10:39.866 01:48:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:10:39.866 01:48:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:10:39.866 01:48:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-2 00:10:39.866 01:48:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-2/domain/1 00:10:39.866 01:48:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-2/domain/2 00:10:39.866 01:48:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-2/fuzz_vfio_json.conf 00:10:39.866 01:48:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:10:39.866 01:48:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:10:39.866 01:48:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-2 /tmp/vfio-user-2/domain/1 /tmp/vfio-user-2/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:10:39.866 01:48:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-2/domain/1%; 00:10:39.866 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-2/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:10:39.866 01:48:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:10:39.866 01:48:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:10:39.866 01:48:09 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-2/domain/1 -c /tmp/vfio-user-2/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 -Y /tmp/vfio-user-2/domain/2 -r /tmp/vfio-user-2/spdk2.sock -Z 2 00:10:39.866 [2024-10-09 01:48:09.387014] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 00:10:39.866 [2024-10-09 01:48:09.387087] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4050296 ] 00:10:39.866 [2024-10-09 01:48:09.465223] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.866 [2024-10-09 01:48:09.510121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.124 INFO: Running with entropic power schedule (0xFF, 100). 00:10:40.124 INFO: Seed: 2989632125 00:10:40.124 INFO: Loaded 1 modules (381086 inline 8-bit counters): 381086 [0x2ba220c, 0x2bff2aa), 00:10:40.124 INFO: Loaded 1 PC tables (381086 PCs): 381086 [0x2bff2b0,0x31cfc90), 00:10:40.124 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_2 00:10:40.124 INFO: A corpus is not provided, starting from an empty corpus 00:10:40.124 #2 INITED exec/s: 0 rss: 67Mb 00:10:40.124 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
00:10:40.124 This may also happen if the target rejected all inputs we tried so far 00:10:40.124 [2024-10-09 01:48:09.767603] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: enabling controller 00:10:40.383 [2024-10-09 01:48:09.809181] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:10:40.642 NEW_FUNC[1/672]: 0x43c578 in fuzz_vfio_user_get_region_info /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:103 00:10:40.642 NEW_FUNC[2/672]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:10:40.642 #129 NEW cov: 11085 ft: 11050 corp: 2/9b lim: 8 exec/s: 0 rss: 74Mb L: 8/8 MS: 2 ChangeByte-InsertRepeatedBytes- 00:10:40.642 [2024-10-09 01:48:10.287451] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:10:40.901 #135 NEW cov: 11118 ft: 13632 corp: 3/17b lim: 8 exec/s: 0 rss: 75Mb L: 8/8 MS: 1 CrossOver- 00:10:40.901 [2024-10-09 01:48:10.471120] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:10:41.160 NEW_FUNC[1/1]: 0x1bbf518 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:10:41.160 #136 NEW cov: 11135 ft: 14478 corp: 4/25b lim: 8 exec/s: 0 rss: 75Mb L: 8/8 MS: 1 ShuffleBytes- 00:10:41.160 [2024-10-09 01:48:10.654083] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:10:41.160 #137 NEW cov: 11138 ft: 14915 corp: 5/33b lim: 8 exec/s: 137 rss: 76Mb L: 8/8 MS: 1 ChangeBit- 00:10:41.419 [2024-10-09 01:48:10.838953] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:10:41.419 #138 NEW cov: 11138 ft: 15832 corp: 6/41b lim: 8 exec/s: 138 rss: 76Mb L: 8/8 MS: 1 ShuffleBytes- 00:10:41.419 [2024-10-09 01:48:11.022146] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:10:41.677 #139 NEW cov: 11138 ft: 15910 corp: 7/49b lim: 8 exec/s: 139 rss: 76Mb L: 8/8 MS: 1 ChangeByte- 00:10:41.677 [2024-10-09 01:48:11.201294] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:10:41.677 #147 NEW cov: 11138 ft: 16093 corp: 8/57b lim: 8 exec/s: 147 rss: 76Mb L: 8/8 MS: 3 EraseBytes-CrossOver-CrossOver- 00:10:41.936 [2024-10-09 01:48:11.380061] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:10:41.936 #148 NEW cov: 11138 ft: 16363 corp: 9/65b lim: 8 exec/s: 148 rss: 76Mb L: 8/8 MS: 1 CMP- DE: "\001\001"- 00:10:41.936 [2024-10-09 01:48:11.560328] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:10:42.195 #150 NEW cov: 11145 ft: 16423 corp: 10/73b lim: 8 exec/s: 150 rss: 76Mb L: 8/8 MS: 2 EraseBytes-CopyPart- 00:10:42.195 [2024-10-09 01:48:11.742320] vfio_user.c: 170:vfio_user_dev_send_request: *ERROR*: Oversized argument length, command 5 00:10:42.195 #156 NEW cov: 11145 ft: 16526 corp: 11/81b lim: 8 exec/s: 78 rss: 76Mb L: 8/8 MS: 1 CopyPart- 00:10:42.195 #156 DONE cov: 11145 ft: 16526 corp: 11/81b lim: 8 exec/s: 78 rss: 76Mb 00:10:42.195 ###### Recommended dictionary. ###### 00:10:42.195 "\001\001" # Uses: 0 00:10:42.195 ###### End of recommended dictionary. 
###### 00:10:42.195 Done 156 runs in 2 second(s) 00:10:42.454 [2024-10-09 01:48:11.865998] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-2/domain/2: disabling controller 00:10:42.454 01:48:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-2 /var/tmp/suppress_vfio_fuzz 00:10:42.454 01:48:12 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:42.454 01:48:12 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:42.454 01:48:12 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 3 1 0x1 00:10:42.454 01:48:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=3 00:10:42.454 01:48:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:10:42.454 01:48:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:10:42.454 01:48:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:10:42.454 01:48:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-3 00:10:42.454 01:48:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-3/domain/1 00:10:42.454 01:48:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-3/domain/2 00:10:42.454 01:48:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-3/fuzz_vfio_json.conf 00:10:42.454 01:48:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:10:42.454 01:48:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:10:42.454 01:48:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-3 /tmp/vfio-user-3/domain/1 /tmp/vfio-user-3/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:10:42.454 01:48:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-3/domain/1%; 00:10:42.454 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-3/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:10:42.454 01:48:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:10:42.454 01:48:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:10:42.454 01:48:12 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-3/domain/1 -c /tmp/vfio-user-3/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 -Y /tmp/vfio-user-3/domain/2 -r /tmp/vfio-user-3/spdk3.sock -Z 3 00:10:42.714 [2024-10-09 01:48:12.137275] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 
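Each numbered pass in this log (vfio-user-0 through vfio-user-6) is driven by the small loop traced from ../common.sh: the number of targets comes from counting '.fn =' entries in llvm_vfio_fuzz.c, and each target gets one second on core 0x1. A sketch of that driver, reconstructed from the trace; the $SPDK_DIR shorthand and the function wrapper are assumptions, since the xtrace shows only the expanded absolute paths:

  #!/usr/bin/env bash
  # Short-mode fuzz driver as traced from test/fuzz/llvm/vfio/../common.sh.
  SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk   # workspace path from the log

  fuzzfile=$SPDK_DIR/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c
  fuzz_num=$(grep -c '\.fn =' "$fuzzfile")   # 7 registered fuzz targets in this run

  start_llvm_fuzz_short() {
    local fuzz_num=$1 time=$2
    for ((i = 0; i < fuzz_num; i++)); do
      # start_llvm_fuzz (vfio/run.sh) prepares /tmp/vfio-user-$i and launches the
      # fuzzer binary; see the per-run sketch further below.
      start_llvm_fuzz "$i" "$time" 0x1
    done
  }

  start_llvm_fuzz_short "$fuzz_num" 1   # one second per target, core mask 0x1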
00:10:42.714 [2024-10-09 01:48:12.137370] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4050693 ] 00:10:42.714 [2024-10-09 01:48:12.216570] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.714 [2024-10-09 01:48:12.263335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.973 INFO: Running with entropic power schedule (0xFF, 100). 00:10:42.973 INFO: Seed: 1449683376 00:10:42.973 INFO: Loaded 1 modules (381086 inline 8-bit counters): 381086 [0x2ba220c, 0x2bff2aa), 00:10:42.973 INFO: Loaded 1 PC tables (381086 PCs): 381086 [0x2bff2b0,0x31cfc90), 00:10:42.973 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_3 00:10:42.973 INFO: A corpus is not provided, starting from an empty corpus 00:10:42.973 #2 INITED exec/s: 0 rss: 67Mb 00:10:42.973 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:10:42.973 This may also happen if the target rejected all inputs we tried so far 00:10:42.973 [2024-10-09 01:48:12.520656] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: enabling controller 00:10:43.491 NEW_FUNC[1/670]: 0x43cc68 in fuzz_vfio_user_dma_map /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:124 00:10:43.491 NEW_FUNC[2/670]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:10:43.491 #99 NEW cov: 11091 ft: 11059 corp: 2/33b lim: 32 exec/s: 0 rss: 74Mb L: 32/32 MS: 2 InsertRepeatedBytes-InsertRepeatedBytes- 00:10:43.750 NEW_FUNC[1/2]: 0x133e938 in nvmf_bdev_ctrlr_write_cmd /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr_bdev.c:388 00:10:43.750 NEW_FUNC[2/2]: 0x1340458 in nvmf_bdev_ctrlr_get_rw_ext_params /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/ctrlr_bdev.c:266 00:10:43.750 #104 NEW cov: 11129 ft: 14463 corp: 3/65b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 5 CopyPart-InsertByte-CrossOver-InsertRepeatedBytes-InsertRepeatedBytes- 00:10:43.750 NEW_FUNC[1/1]: 0x1bbf518 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:10:43.750 #105 NEW cov: 11146 ft: 15254 corp: 4/97b lim: 32 exec/s: 0 rss: 76Mb L: 32/32 MS: 1 CMP- DE: "\005\000"- 00:10:44.008 #116 NEW cov: 11146 ft: 15556 corp: 5/129b lim: 32 exec/s: 116 rss: 76Mb L: 32/32 MS: 1 ChangeBinInt- 00:10:44.267 #117 NEW cov: 11146 ft: 17117 corp: 6/161b lim: 32 exec/s: 117 rss: 77Mb L: 32/32 MS: 1 ShuffleBytes- 00:10:44.267 #123 NEW cov: 11146 ft: 17296 corp: 7/193b lim: 32 exec/s: 123 rss: 77Mb L: 32/32 MS: 1 CopyPart- 00:10:44.526 #124 NEW cov: 11146 ft: 17469 corp: 8/225b lim: 32 exec/s: 124 rss: 77Mb L: 32/32 MS: 1 CrossOver- 00:10:44.785 #130 NEW cov: 11153 ft: 17573 corp: 9/257b lim: 32 exec/s: 130 rss: 77Mb L: 32/32 MS: 1 CrossOver- 00:10:45.044 #131 NEW cov: 11153 ft: 17645 corp: 10/289b lim: 32 exec/s: 131 rss: 77Mb L: 32/32 MS: 1 CrossOver- 00:10:45.044 #132 NEW cov: 11153 ft: 17958 corp: 11/321b lim: 32 exec/s: 66 rss: 77Mb L: 32/32 MS: 1 ChangeBinInt- 00:10:45.044 #132 DONE cov: 11153 ft: 17958 corp: 11/321b lim: 32 exec/s: 66 rss: 77Mb 00:10:45.044 ###### Recommended dictionary. ###### 00:10:45.044 "\005\000" # Uses: 2 00:10:45.044 ###### End of recommended dictionary. 
###### 00:10:45.044 Done 132 runs in 2 second(s) 00:10:45.044 [2024-10-09 01:48:14.672039] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-3/domain/2: disabling controller 00:10:45.304 01:48:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-3 /var/tmp/suppress_vfio_fuzz 00:10:45.304 01:48:14 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:45.304 01:48:14 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:45.304 01:48:14 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 4 1 0x1 00:10:45.304 01:48:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=4 00:10:45.304 01:48:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:10:45.304 01:48:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:10:45.304 01:48:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:10:45.304 01:48:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-4 00:10:45.304 01:48:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-4/domain/1 00:10:45.304 01:48:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-4/domain/2 00:10:45.304 01:48:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-4/fuzz_vfio_json.conf 00:10:45.304 01:48:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:10:45.304 01:48:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:10:45.304 01:48:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-4 /tmp/vfio-user-4/domain/1 /tmp/vfio-user-4/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:10:45.304 01:48:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-4/domain/1%; 00:10:45.304 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-4/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:10:45.304 01:48:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:10:45.304 01:48:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:10:45.304 01:48:14 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-4/domain/1 -c /tmp/vfio-user-4/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 -Y /tmp/vfio-user-4/domain/2 -r /tmp/vfio-user-4/spdk4.sock -Z 4 00:10:45.304 [2024-10-09 01:48:14.964629] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 
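For every fuzzer_type the vfio/run.sh trace repeats the same preparation before launching llvm_vfio_fuzz: per-run directories, a rewritten vfio-user JSON config, two LSAN leak suppressions, then the fuzzer binary with its transport and corpus flags. A sketch of one pass, pieced together from the run.sh@25-@47 lines above; the redirections for the sed output and the suppression echoes are inferred (the xtrace does not show them), and $SPDK_DIR again stands in for the absolute workspace path:

  #!/usr/bin/env bash
  # One start_llvm_fuzz pass, reconstructed from the vfio/run.sh trace (fuzzer_type=4 shown).
  N=4
  SPDK_DIR=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
  fuzzer_dir=/tmp/vfio-user-$N
  vfiouser_dir=$fuzzer_dir/domain/1
  vfiouser_io_dir=$fuzzer_dir/domain/2
  vfiouser_cfg=$fuzzer_dir/fuzz_vfio_json.conf
  corpus_dir=$SPDK_DIR/../corpus/llvm_vfio_$N
  suppress_file=/var/tmp/suppress_vfio_fuzz

  mkdir -p "$fuzzer_dir" "$vfiouser_dir" "$vfiouser_io_dir" "$corpus_dir"

  # Point the template config at this run's vfio-user domain directories
  # (output redirection to $vfiouser_cfg inferred from the -c argument below).
  sed -e "s%/tmp/vfio-user/domain/1%$vfiouser_dir%" \
      -e "s%/tmp/vfio-user/domain/2%$vfiouser_io_dir%" \
      "$SPDK_DIR/test/fuzz/llvm/vfio/fuzz_vfio_json.conf" > "$vfiouser_cfg"

  # Known allocations LSAN should ignore during this run.
  {
    echo leak:spdk_nvmf_qpair_disconnect
    echo leak:nvmf_ctrlr_create
  } > "$suppress_file"   # redirection target assumed; the xtrace shows only the echoes

  # Launch with the flags run.sh passes above: -m is the core mask and -r the RPC
  # socket; -t matches the per-target time and -Z the fuzzer_type index; the
  # -F/-Y/-c/-D/-P mappings are inferred from the variables substituted in the trace.
  LSAN_OPTIONS=report_objects=1:suppressions=$suppress_file:print_suppressions=0 \
    "$SPDK_DIR/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz" \
      -m 0x1 -s 0 \
      -P "$SPDK_DIR/../output/llvm/" \
      -F "$vfiouser_dir" \
      -c "$vfiouser_cfg" \
      -t 1 \
      -D "$corpus_dir" \
      -Y "$vfiouser_io_dir" \
      -r "$fuzzer_dir/spdk$N.sock" \
      -Z "$N"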
00:10:45.304 [2024-10-09 01:48:14.964705] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4051083 ] 00:10:45.563 [2024-10-09 01:48:15.043893] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.563 [2024-10-09 01:48:15.093039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.823 INFO: Running with entropic power schedule (0xFF, 100). 00:10:45.823 INFO: Seed: 4270661202 00:10:45.823 INFO: Loaded 1 modules (381086 inline 8-bit counters): 381086 [0x2ba220c, 0x2bff2aa), 00:10:45.823 INFO: Loaded 1 PC tables (381086 PCs): 381086 [0x2bff2b0,0x31cfc90), 00:10:45.823 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_4 00:10:45.823 INFO: A corpus is not provided, starting from an empty corpus 00:10:45.823 #2 INITED exec/s: 0 rss: 68Mb 00:10:45.823 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:10:45.823 This may also happen if the target rejected all inputs we tried so far 00:10:45.823 [2024-10-09 01:48:15.343822] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: enabling controller 00:10:45.823 [2024-10-09 01:48:15.414057] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [(nil), (nil)) fd=300 offset=0x6f00 prot=0x3: Invalid argument 00:10:45.823 [2024-10-09 01:48:15.414084] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0) offset=0x6f00 flags=0x3: Invalid argument 00:10:45.823 [2024-10-09 01:48:15.414095] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument 00:10:45.823 [2024-10-09 01:48:15.414138] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:10:45.823 [2024-10-09 01:48:15.415046] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0) flags=0: No such file or directory 00:10:45.823 [2024-10-09 01:48:15.415061] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:10:45.823 [2024-10-09 01:48:15.415077] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:10:46.341 NEW_FUNC[1/673]: 0x43d4e8 in fuzz_vfio_user_dma_unmap /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:144 00:10:46.341 NEW_FUNC[2/673]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:10:46.341 #136 NEW cov: 11125 ft: 10699 corp: 2/33b lim: 32 exec/s: 0 rss: 75Mb L: 32/32 MS: 4 ShuffleBytes-ChangeBinInt-InsertRepeatedBytes-InsertByte- 00:10:46.341 [2024-10-09 01:48:15.911808] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [(nil), (nil)) fd=302 offset=0x6f00 prot=0x3: Invalid argument 00:10:46.341 [2024-10-09 01:48:15.911857] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0) offset=0x6f00 flags=0x3: Invalid argument 00:10:46.341 [2024-10-09 01:48:15.911868] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument 00:10:46.341 [2024-10-09 01:48:15.911902] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 
return failure 00:10:46.341 [2024-10-09 01:48:15.912798] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0) flags=0: No such file or directory 00:10:46.341 [2024-10-09 01:48:15.912824] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:10:46.341 [2024-10-09 01:48:15.912842] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:10:46.600 #142 NEW cov: 11139 ft: 13893 corp: 3/65b lim: 32 exec/s: 0 rss: 76Mb L: 32/32 MS: 1 CopyPart- 00:10:46.600 [2024-10-09 01:48:16.108451] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [(nil), (nil)) fd=302 offset=0x4000000006f00 prot=0x3: Invalid argument 00:10:46.600 [2024-10-09 01:48:16.108476] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0) offset=0x4000000006f00 flags=0x3: Invalid argument 00:10:46.600 [2024-10-09 01:48:16.108487] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument 00:10:46.600 [2024-10-09 01:48:16.108519] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:10:46.600 [2024-10-09 01:48:16.109441] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0) flags=0: No such file or directory 00:10:46.600 [2024-10-09 01:48:16.109461] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:10:46.600 [2024-10-09 01:48:16.109477] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:10:46.600 NEW_FUNC[1/1]: 0x1bbf518 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:10:46.600 #143 NEW cov: 11156 ft: 15471 corp: 4/97b lim: 32 exec/s: 0 rss: 77Mb L: 32/32 MS: 1 ChangeBit- 00:10:46.859 [2024-10-09 01:48:16.312319] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [(nil), (nil)) fd=302 offset=0x6f00 prot=0x3: Invalid argument 00:10:46.859 [2024-10-09 01:48:16.312341] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0) offset=0x6f00 flags=0x3: Invalid argument 00:10:46.859 [2024-10-09 01:48:16.312352] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument 00:10:46.859 [2024-10-09 01:48:16.312368] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:10:46.859 [2024-10-09 01:48:16.313332] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0) flags=0: No such file or directory 00:10:46.859 [2024-10-09 01:48:16.313352] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:10:46.859 [2024-10-09 01:48:16.313368] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:10:46.859 #144 NEW cov: 11156 ft: 16330 corp: 5/129b lim: 32 exec/s: 144 rss: 77Mb L: 32/32 MS: 1 CopyPart- 00:10:46.859 [2024-10-09 01:48:16.508989] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [(nil), (nil)) fd=302 offset=0x6f00 prot=0x3: Invalid argument 00:10:46.859 [2024-10-09 01:48:16.509013] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0) offset=0x6f00 flags=0x3: Invalid argument 00:10:46.859 [2024-10-09 01:48:16.509027] 
vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument 00:10:46.859 [2024-10-09 01:48:16.509044] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:10:46.859 [2024-10-09 01:48:16.510023] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0) flags=0: No such file or directory 00:10:46.859 [2024-10-09 01:48:16.510042] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:10:46.859 [2024-10-09 01:48:16.510058] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:10:47.118 #145 NEW cov: 11156 ft: 16524 corp: 6/161b lim: 32 exec/s: 145 rss: 77Mb L: 32/32 MS: 1 ChangeBinInt- 00:10:47.118 [2024-10-09 01:48:16.706901] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [(nil), (nil)) fd=302 offset=0x6f00 prot=0x3: Invalid argument 00:10:47.118 [2024-10-09 01:48:16.706925] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0) offset=0x6f00 flags=0x3: Invalid argument 00:10:47.118 [2024-10-09 01:48:16.706935] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument 00:10:47.118 [2024-10-09 01:48:16.706968] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:10:47.118 [2024-10-09 01:48:16.707927] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0) flags=0: No such file or directory 00:10:47.118 [2024-10-09 01:48:16.707947] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:10:47.118 [2024-10-09 01:48:16.707963] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:10:47.377 #146 NEW cov: 11156 ft: 17362 corp: 7/193b lim: 32 exec/s: 146 rss: 77Mb L: 32/32 MS: 1 ShuffleBytes- 00:10:47.377 [2024-10-09 01:48:16.922676] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: DMA region size 30399297484750848 > max 8796093022208 00:10:47.377 [2024-10-09 01:48:16.922700] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0x6c000000000000) offset=0x6f00 flags=0x3: No space left on device 00:10:47.377 [2024-10-09 01:48:16.922712] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: No space left on device 00:10:47.377 [2024-10-09 01:48:16.922728] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:10:47.377 [2024-10-09 01:48:16.923692] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0x6c000000000000) flags=0: No such file or directory 00:10:47.377 [2024-10-09 01:48:16.923711] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:10:47.377 [2024-10-09 01:48:16.923727] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:10:47.377 #157 NEW cov: 11156 ft: 17582 corp: 8/225b lim: 32 exec/s: 157 rss: 77Mb L: 32/32 MS: 1 ChangeByte- 00:10:47.636 [2024-10-09 01:48:17.120340] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [(nil), 0x800) fd=302 offset=0x4000000006f00 prot=0x3: Permission denied 00:10:47.636 [2024-10-09 01:48:17.120364] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0x800) 
offset=0x4000000006f00 flags=0x3: Permission denied 00:10:47.636 [2024-10-09 01:48:17.120374] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Permission denied 00:10:47.636 [2024-10-09 01:48:17.120390] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:10:47.636 [2024-10-09 01:48:17.121344] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0x800) flags=0: No such file or directory 00:10:47.636 [2024-10-09 01:48:17.121363] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:10:47.636 [2024-10-09 01:48:17.121379] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:10:47.636 #163 NEW cov: 11163 ft: 17882 corp: 9/257b lim: 32 exec/s: 163 rss: 77Mb L: 32/32 MS: 1 ChangeBinInt- 00:10:47.896 [2024-10-09 01:48:17.320734] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to memory map DMA region [(nil), (nil)) fd=302 offset=0x6f00 prot=0x3: Invalid argument 00:10:47.896 [2024-10-09 01:48:17.320758] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: failed to add DMA region [0, 0) offset=0x6f00 flags=0x3: Invalid argument 00:10:47.896 [2024-10-09 01:48:17.320769] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 2 failed: Invalid argument 00:10:47.896 [2024-10-09 01:48:17.320785] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 2 return failure 00:10:47.896 [2024-10-09 01:48:17.321759] vfio_user.c:3104:vfio_user_log: *WARNING*: /tmp/vfio-user-4/domain/1: failed to remove DMA region [0, 0) flags=0: No such file or directory 00:10:47.896 [2024-10-09 01:48:17.321778] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-4/domain/1: msg0: cmd 3 failed: No such file or directory 00:10:47.896 [2024-10-09 01:48:17.321794] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 3 return failure 00:10:47.896 #164 NEW cov: 11163 ft: 18167 corp: 10/289b lim: 32 exec/s: 82 rss: 77Mb L: 32/32 MS: 1 ShuffleBytes- 00:10:47.896 #164 DONE cov: 11163 ft: 18167 corp: 10/289b lim: 32 exec/s: 82 rss: 77Mb 00:10:47.896 Done 164 runs in 2 second(s) 00:10:47.896 [2024-10-09 01:48:17.464028] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-4/domain/2: disabling controller 00:10:48.155 01:48:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-4 /var/tmp/suppress_vfio_fuzz 00:10:48.155 01:48:17 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:48.155 01:48:17 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:48.155 01:48:17 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 5 1 0x1 00:10:48.155 01:48:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=5 00:10:48.155 01:48:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:10:48.155 01:48:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:10:48.155 01:48:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:10:48.155 01:48:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-5 00:10:48.155 01:48:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-5/domain/1 00:10:48.155 01:48:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-5/domain/2 00:10:48.155 01:48:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 
-- # local vfiouser_cfg=/tmp/vfio-user-5/fuzz_vfio_json.conf 00:10:48.155 01:48:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:10:48.155 01:48:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:10:48.155 01:48:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-5 /tmp/vfio-user-5/domain/1 /tmp/vfio-user-5/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:10:48.155 01:48:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-5/domain/1%; 00:10:48.155 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-5/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:10:48.155 01:48:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:10:48.155 01:48:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:10:48.155 01:48:17 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-5/domain/1 -c /tmp/vfio-user-5/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 -Y /tmp/vfio-user-5/domain/2 -r /tmp/vfio-user-5/spdk5.sock -Z 5 00:10:48.156 [2024-10-09 01:48:17.751028] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 00:10:48.156 [2024-10-09 01:48:17.751113] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4051447 ] 00:10:48.415 [2024-10-09 01:48:17.829739] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.415 [2024-10-09 01:48:17.877380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.415 INFO: Running with entropic power schedule (0xFF, 100). 00:10:48.415 INFO: Seed: 2767707042 00:10:48.673 INFO: Loaded 1 modules (381086 inline 8-bit counters): 381086 [0x2ba220c, 0x2bff2aa), 00:10:48.673 INFO: Loaded 1 PC tables (381086 PCs): 381086 [0x2bff2b0,0x31cfc90), 00:10:48.673 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_5 00:10:48.673 INFO: A corpus is not provided, starting from an empty corpus 00:10:48.673 #2 INITED exec/s: 0 rss: 67Mb 00:10:48.673 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 
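The launch that vfio/run.sh@47 performs for each instance reduces to the shape sketched below. This is a reconstruction from the xtrace above, not authoritative usage: the N and SPDK shell variables are shorthand introduced here, the flag-to-purpose mapping (-t from timen, -D from corpus_dir, -F/-Y from the two domain directories, -Z from fuzzer_type) is inferred from the run.sh locals, and the redirection targets are inferred from the vfiouser_cfg and suppress_file locals.

# Sketch: re-running vfio-user fuzzer instance N by hand, mirroring the
# mkdir/sed/echo/launch sequence in the xtrace above. Paths assume the
# same workspace layout; flag meanings are inferred, not documented here.
N=5
SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk

mkdir -p "/tmp/vfio-user-$N/domain/1" "/tmp/vfio-user-$N/domain/2" "$SPDK/../corpus/llvm_vfio_$N"

# Point the JSON template at this instance's vfio-user directories
# (destination file inferred from the vfiouser_cfg local above).
sed -e "s%/tmp/vfio-user/domain/1%/tmp/vfio-user-$N/domain/1%; s%/tmp/vfio-user/domain/2%/tmp/vfio-user-$N/domain/2%" \
    "$SPDK/test/fuzz/llvm/vfio/fuzz_vfio_json.conf" > "/tmp/vfio-user-$N/fuzz_vfio_json.conf"

# LeakSanitizer suppressions written at run.sh@43-44
# (destination inferred from the suppress_file local above).
printf 'leak:spdk_nvmf_qpair_disconnect\nleak:nvmf_ctrlr_create\n' > /var/tmp/suppress_vfio_fuzz

# LSAN_OPTIONS mirrors run.sh@34; exporting it to the fuzzer is an assumption.
LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 \
"$SPDK/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz" \
    -m 0x1 -s 0 \
    -P "$SPDK/../output/llvm/" \
    -F "/tmp/vfio-user-$N/domain/1" \
    -c "/tmp/vfio-user-$N/fuzz_vfio_json.conf" \
    -t 1 \
    -D "$SPDK/../corpus/llvm_vfio_$N" \
    -Y "/tmp/vfio-user-$N/domain/2" \
    -r "/tmp/vfio-user-$N/spdk$N.sock" \
    -Z "$N"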
00:10:48.673 This may also happen if the target rejected all inputs we tried so far 00:10:48.673 [2024-10-09 01:48:18.135677] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: enabling controller 00:10:48.673 [2024-10-09 01:48:18.203844] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:48.673 [2024-10-09 01:48:18.203883] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:48.932 NEW_FUNC[1/673]: 0x43dee8 in fuzz_vfio_user_irq_set /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:171 00:10:48.932 NEW_FUNC[2/673]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:10:48.932 #87 NEW cov: 11121 ft: 10633 corp: 2/14b lim: 13 exec/s: 0 rss: 74Mb L: 13/13 MS: 5 ChangeByte-CopyPart-ShuffleBytes-InsertRepeatedBytes-CrossOver- 00:10:49.191 [2024-10-09 01:48:18.657493] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:49.191 [2024-10-09 01:48:18.657541] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:49.191 #97 NEW cov: 11137 ft: 13979 corp: 3/27b lim: 13 exec/s: 0 rss: 75Mb L: 13/13 MS: 5 ChangeByte-InsertByte-ChangeBit-ChangeByte-InsertRepeatedBytes- 00:10:49.191 [2024-10-09 01:48:18.848603] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:49.191 [2024-10-09 01:48:18.848637] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:49.450 NEW_FUNC[1/1]: 0x1bbf518 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:10:49.450 #108 NEW cov: 11154 ft: 15469 corp: 4/40b lim: 13 exec/s: 0 rss: 76Mb L: 13/13 MS: 1 ShuffleBytes- 00:10:49.450 [2024-10-09 01:48:19.034853] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:49.450 [2024-10-09 01:48:19.034884] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:49.709 #109 NEW cov: 11154 ft: 15864 corp: 5/53b lim: 13 exec/s: 109 rss: 76Mb L: 13/13 MS: 1 ChangeBit- 00:10:49.709 [2024-10-09 01:48:19.220082] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:49.709 [2024-10-09 01:48:19.220111] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:49.709 #110 NEW cov: 11157 ft: 16197 corp: 6/66b lim: 13 exec/s: 110 rss: 77Mb L: 13/13 MS: 1 ShuffleBytes- 00:10:49.968 [2024-10-09 01:48:19.407673] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:49.968 [2024-10-09 01:48:19.407704] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:49.968 #111 NEW cov: 11157 ft: 16848 corp: 7/79b lim: 13 exec/s: 111 rss: 77Mb L: 13/13 MS: 1 ShuffleBytes- 00:10:49.968 [2024-10-09 01:48:19.596499] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:49.968 [2024-10-09 01:48:19.596530] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:50.227 #112 NEW cov: 11157 ft: 16904 corp: 8/92b lim: 13 exec/s: 112 rss: 77Mb L: 13/13 MS: 1 ChangeBinInt- 00:10:50.227 [2024-10-09 01:48:19.784890] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:50.227 [2024-10-09 01:48:19.784929] vfio_user.c: 
144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:50.227 #113 NEW cov: 11164 ft: 17009 corp: 9/105b lim: 13 exec/s: 113 rss: 77Mb L: 13/13 MS: 1 ChangeByte- 00:10:50.542 [2024-10-09 01:48:19.959273] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:50.542 [2024-10-09 01:48:19.959306] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:50.542 #114 NEW cov: 11164 ft: 17333 corp: 10/118b lim: 13 exec/s: 114 rss: 77Mb L: 13/13 MS: 1 ChangeBit- 00:10:50.542 [2024-10-09 01:48:20.136282] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-5/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:50.542 [2024-10-09 01:48:20.136320] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:50.828 #115 NEW cov: 11164 ft: 17353 corp: 11/131b lim: 13 exec/s: 57 rss: 77Mb L: 13/13 MS: 1 ChangeBinInt- 00:10:50.828 #115 DONE cov: 11164 ft: 17353 corp: 11/131b lim: 13 exec/s: 57 rss: 77Mb 00:10:50.828 Done 115 runs in 2 second(s) 00:10:50.828 [2024-10-09 01:48:20.271015] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-5/domain/2: disabling controller 00:10:51.087 01:48:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-5 /var/tmp/suppress_vfio_fuzz 00:10:51.087 01:48:20 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:51.087 01:48:20 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:51.088 01:48:20 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@73 -- # start_llvm_fuzz 6 1 0x1 00:10:51.088 01:48:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@22 -- # local fuzzer_type=6 00:10:51.088 01:48:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@23 -- # local timen=1 00:10:51.088 01:48:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@24 -- # local core=0x1 00:10:51.088 01:48:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@25 -- # local corpus_dir=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:10:51.088 01:48:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@26 -- # local fuzzer_dir=/tmp/vfio-user-6 00:10:51.088 01:48:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@27 -- # local vfiouser_dir=/tmp/vfio-user-6/domain/1 00:10:51.088 01:48:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@28 -- # local vfiouser_io_dir=/tmp/vfio-user-6/domain/2 00:10:51.088 01:48:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@29 -- # local vfiouser_cfg=/tmp/vfio-user-6/fuzz_vfio_json.conf 00:10:51.088 01:48:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@30 -- # local suppress_file=/var/tmp/suppress_vfio_fuzz 00:10:51.088 01:48:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@34 -- # local LSAN_OPTIONS=report_objects=1:suppressions=/var/tmp/suppress_vfio_fuzz:print_suppressions=0 00:10:51.088 01:48:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@36 -- # mkdir -p /tmp/vfio-user-6 /tmp/vfio-user-6/domain/1 /tmp/vfio-user-6/domain/2 /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:10:51.088 01:48:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@39 -- # sed -e 's%/tmp/vfio-user/domain/1%/tmp/vfio-user-6/domain/1%; 00:10:51.088 s%/tmp/vfio-user/domain/2%/tmp/vfio-user-6/domain/2%' /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/vfio/fuzz_vfio_json.conf 00:10:51.088 01:48:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@43 -- # echo leak:spdk_nvmf_qpair_disconnect 00:10:51.088 01:48:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@44 -- # echo leak:nvmf_ctrlr_create 00:10:51.088 01:48:20 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@47 -- # 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz -m 0x1 -s 0 -P /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/llvm/ -F /tmp/vfio-user-6/domain/1 -c /tmp/vfio-user-6/fuzz_vfio_json.conf -t 1 -D /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 -Y /tmp/vfio-user-6/domain/2 -r /tmp/vfio-user-6/spdk6.sock -Z 6 00:10:51.088 [2024-10-09 01:48:20.562298] Starting SPDK v25.01-pre git sha1 3164389d2 / DPDK 24.03.0 initialization... 00:10:51.088 [2024-10-09 01:48:20.562379] [ DPDK EAL parameters: vfio_fuzz --no-shconf -c 0x1 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4051804 ] 00:10:51.088 [2024-10-09 01:48:20.640760] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.088 [2024-10-09 01:48:20.689510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.347 INFO: Running with entropic power schedule (0xFF, 100). 00:10:51.347 INFO: Seed: 1282748397 00:10:51.347 INFO: Loaded 1 modules (381086 inline 8-bit counters): 381086 [0x2ba220c, 0x2bff2aa), 00:10:51.347 INFO: Loaded 1 PC tables (381086 PCs): 381086 [0x2bff2b0,0x31cfc90), 00:10:51.347 INFO: 0 files found in /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../corpus/llvm_vfio_6 00:10:51.347 INFO: A corpus is not provided, starting from an empty corpus 00:10:51.347 #2 INITED exec/s: 0 rss: 67Mb 00:10:51.347 WARNING: no interesting inputs were found so far. Is the code instrumented for coverage? 00:10:51.347 This may also happen if the target rejected all inputs we tried so far 00:10:51.347 [2024-10-09 01:48:20.943610] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: enabling controller 00:10:51.347 [2024-10-09 01:48:20.996896] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:51.347 [2024-10-09 01:48:20.996929] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:51.865 NEW_FUNC[1/673]: 0x43ebd8 in fuzz_vfio_user_set_msix /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:190 00:10:51.865 NEW_FUNC[2/673]: 0x4410f8 in TestOneInput /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/app/fuzz/llvm_vfio_fuzz/llvm_vfio_fuzz.c:220 00:10:51.865 #12 NEW cov: 11115 ft: 11068 corp: 2/10b lim: 9 exec/s: 0 rss: 74Mb L: 9/9 MS: 5 ShuffleBytes-ChangeByte-InsertByte-ChangeBit-InsertRepeatedBytes- 00:10:51.865 [2024-10-09 01:48:21.468213] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:51.865 [2024-10-09 01:48:21.468260] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:52.124 #13 NEW cov: 11129 ft: 14585 corp: 3/19b lim: 9 exec/s: 0 rss: 75Mb L: 9/9 MS: 1 ChangeByte- 00:10:52.124 [2024-10-09 01:48:21.652664] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:52.124 [2024-10-09 01:48:21.652699] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:52.124 NEW_FUNC[1/1]: 0x1bbf518 in get_rusage /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/event/reactor.c:662 00:10:52.124 #24 NEW cov: 11146 ft: 15822 corp: 4/28b lim: 9 exec/s: 0 rss: 76Mb L: 9/9 MS: 1 ChangeBit- 00:10:52.383 [2024-10-09 01:48:21.837384] vfio_user.c:3106:vfio_user_log: *ERROR*: 
/tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:52.383 [2024-10-09 01:48:21.837415] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:52.383 #25 NEW cov: 11146 ft: 16269 corp: 5/37b lim: 9 exec/s: 25 rss: 76Mb L: 9/9 MS: 1 ChangeBinInt- 00:10:52.383 [2024-10-09 01:48:22.013914] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:52.383 [2024-10-09 01:48:22.013947] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:52.642 #36 NEW cov: 11146 ft: 16721 corp: 6/46b lim: 9 exec/s: 36 rss: 77Mb L: 9/9 MS: 1 ShuffleBytes- 00:10:52.642 [2024-10-09 01:48:22.187682] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:52.642 [2024-10-09 01:48:22.187713] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:52.642 #37 NEW cov: 11146 ft: 16988 corp: 7/55b lim: 9 exec/s: 37 rss: 77Mb L: 9/9 MS: 1 CrossOver- 00:10:52.901 [2024-10-09 01:48:22.362949] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:52.901 [2024-10-09 01:48:22.362980] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:52.901 #38 NEW cov: 11149 ft: 17387 corp: 8/64b lim: 9 exec/s: 38 rss: 77Mb L: 9/9 MS: 1 ChangeBinInt- 00:10:52.901 [2024-10-09 01:48:22.539453] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:52.901 [2024-10-09 01:48:22.539483] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:53.160 #39 NEW cov: 11149 ft: 17432 corp: 9/73b lim: 9 exec/s: 39 rss: 77Mb L: 9/9 MS: 1 ShuffleBytes- 00:10:53.160 [2024-10-09 01:48:22.730655] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:53.160 [2024-10-09 01:48:22.730687] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:53.419 #40 NEW cov: 11156 ft: 17836 corp: 10/82b lim: 9 exec/s: 40 rss: 77Mb L: 9/9 MS: 1 ChangeBinInt- 00:10:53.419 [2024-10-09 01:48:22.905222] vfio_user.c:3106:vfio_user_log: *ERROR*: /tmp/vfio-user-6/domain/1: msg0: cmd 8 failed: Invalid argument 00:10:53.419 [2024-10-09 01:48:22.905254] vfio_user.c: 144:vfio_user_read: *ERROR*: Command 8 return failure 00:10:53.419 #46 NEW cov: 11156 ft: 18236 corp: 11/91b lim: 9 exec/s: 23 rss: 77Mb L: 9/9 MS: 1 ChangeBit- 00:10:53.419 #46 DONE cov: 11156 ft: 18236 corp: 11/91b lim: 9 exec/s: 23 rss: 77Mb 00:10:53.419 Done 46 runs in 2 second(s) 00:10:53.419 [2024-10-09 01:48:23.031019] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /tmp/vfio-user-6/domain/2: disabling controller 00:10:53.679 01:48:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@58 -- # rm -rf /tmp/vfio-user-6 /var/tmp/suppress_vfio_fuzz 00:10:53.679 01:48:23 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i++ )) 00:10:53.679 01:48:23 llvm_fuzz.vfio_llvm_fuzz -- ../common.sh@72 -- # (( i < fuzz_num )) 00:10:53.679 01:48:23 llvm_fuzz.vfio_llvm_fuzz -- vfio/run.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:10:53.679 00:10:53.679 real 0m19.718s 00:10:53.679 user 0m27.474s 00:10:53.679 sys 0m2.007s 00:10:53.679 01:48:23 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:53.679 01:48:23 llvm_fuzz.vfio_llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:53.679 ************************************ 00:10:53.679 END TEST vfio_llvm_fuzz 00:10:53.679 ************************************ 00:10:53.679 
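The driver loop that produced instances 4, 5 and 6 above has the following shape. This is a simplified reconstruction from the xtrace (the real logic lives in test/fuzz/llvm/vfio/run.sh and test/fuzz/llvm/common.sh; the loop form and the fuzz_num value are inferred from the trace, not copied from those scripts):

# Sketch of the per-instance driver loop, reconstructed from the
# "(( i++ )) / (( i < fuzz_num )) / start_llvm_fuzz $i 1 0x1" lines
# at ../common.sh@72-73 in the trace.
fuzz_num=7                          # assumed; consistent with instance 6 being the last run above
for (( i = 0; i < fuzz_num; i++ )); do
    start_llvm_fuzz "$i" 1 0x1      # fuzzer_type=$i, timen=1, core=0x1 (run.sh@22-24)
done
trap - SIGINT SIGTERM EXIT          # run.sh@84, once every instance has finished

# Each start_llvm_fuzz call builds /tmp/vfio-user-$i, rewrites the JSON
# config, runs llvm_vfio_fuzz for roughly one second, then removes the
# sandbox again (run.sh@36-58 in the trace above).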
00:10:53.679 real 1m24.058s 00:10:53.679 user 2m7.554s 00:10:53.679 sys 0m9.822s 00:10:53.679 01:48:23 llvm_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:53.679 01:48:23 llvm_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:53.679 ************************************ 00:10:53.679 END TEST llvm_fuzz 00:10:53.679 ************************************ 00:10:53.937 01:48:23 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:10:53.937 01:48:23 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:10:53.937 01:48:23 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:10:53.937 01:48:23 -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:53.937 01:48:23 -- common/autotest_common.sh@10 -- # set +x 00:10:53.937 01:48:23 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:10:53.937 01:48:23 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:10:53.937 01:48:23 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:10:53.937 01:48:23 -- common/autotest_common.sh@10 -- # set +x 00:10:58.126 INFO: APP EXITING 00:10:58.126 INFO: killing all VMs 00:10:58.126 INFO: killing vhost app 00:10:58.126 INFO: EXIT DONE 00:11:00.657 Waiting for block devices as requested 00:11:00.657 0000:1a:00.0 (8086 0a54): vfio-pci -> nvme 00:11:00.916 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:11:00.916 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:11:00.916 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:11:01.175 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:11:01.175 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:11:01.175 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:11:01.175 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:11:01.433 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:11:01.433 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:11:01.433 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:11:01.692 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:11:01.692 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:11:01.692 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:11:01.952 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:11:01.952 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:11:01.952 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:11:07.253 Cleaning 00:11:07.253 Removing: /dev/shm/spdk_tgt_trace.pid4029700 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4027222 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4028349 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4029700 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4030074 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4030849 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4030973 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4031740 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4031906 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4032253 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4032488 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4032726 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4032981 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4033219 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4033414 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4033558 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4033838 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4034423 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4036813 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4037114 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4037238 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4037343 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4037737 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4037740 00:11:07.253 Removing: 
/var/run/dpdk/spdk_pid4038201 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4038304 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4038512 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4038540 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4038742 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4038792 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4039211 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4039405 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4039597 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4039833 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4040406 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4040699 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4040973 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4041325 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4041682 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4042037 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4042398 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4042754 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4043112 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4043473 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4043832 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4044185 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4044522 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4044790 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4045102 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4045456 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4045815 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4046171 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4046528 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4046881 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4047243 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4047596 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4047871 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4048160 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4048581 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4049195 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4049893 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4050296 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4050693 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4051083 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4051447 00:11:07.253 Removing: /var/run/dpdk/spdk_pid4051804 00:11:07.253 Clean 00:11:07.253 01:48:36 -- common/autotest_common.sh@1451 -- # return 0 00:11:07.253 01:48:36 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:11:07.253 01:48:36 -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:07.253 01:48:36 -- common/autotest_common.sh@10 -- # set +x 00:11:07.253 01:48:36 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:11:07.253 01:48:36 -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:07.253 01:48:36 -- common/autotest_common.sh@10 -- # set +x 00:11:07.253 01:48:36 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:11:07.253 01:48:36 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log ]] 00:11:07.253 01:48:36 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/udev.log 00:11:07.253 01:48:36 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:11:07.253 01:48:36 -- spdk/autotest.sh@394 -- # hostname 00:11:07.253 01:48:36 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk -t spdk-wfp-39 -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info 00:11:07.512 geninfo: WARNING: invalid characters removed from testname! 00:11:12.781 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvme/nvme_stubs.gcda 00:11:16.972 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/lib/nvmf/mdns_server.gcda 00:11:19.509 01:48:49 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:11:27.631 01:48:56 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:11:32.906 01:49:02 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:11:38.180 01:49:07 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:11:43.454 01:49:12 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:11:48.725 01:49:18 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc 
geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/cov_total.info 00:11:54.157 01:49:23 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:11:54.157 01:49:23 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:11:54.157 01:49:23 -- common/autotest_common.sh@1681 -- $ lcov --version 00:11:54.157 01:49:23 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:11:54.157 01:49:23 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:11:54.157 01:49:23 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:11:54.157 01:49:23 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:11:54.157 01:49:23 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:11:54.157 01:49:23 -- scripts/common.sh@336 -- $ IFS=.-: 00:11:54.157 01:49:23 -- scripts/common.sh@336 -- $ read -ra ver1 00:11:54.157 01:49:23 -- scripts/common.sh@337 -- $ IFS=.-: 00:11:54.157 01:49:23 -- scripts/common.sh@337 -- $ read -ra ver2 00:11:54.157 01:49:23 -- scripts/common.sh@338 -- $ local 'op=<' 00:11:54.157 01:49:23 -- scripts/common.sh@340 -- $ ver1_l=2 00:11:54.157 01:49:23 -- scripts/common.sh@341 -- $ ver2_l=1 00:11:54.157 01:49:23 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:11:54.157 01:49:23 -- scripts/common.sh@344 -- $ case "$op" in 00:11:54.157 01:49:23 -- scripts/common.sh@345 -- $ : 1 00:11:54.157 01:49:23 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:11:54.157 01:49:23 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:54.157 01:49:23 -- scripts/common.sh@365 -- $ decimal 1 00:11:54.157 01:49:23 -- scripts/common.sh@353 -- $ local d=1 00:11:54.157 01:49:23 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:11:54.157 01:49:23 -- scripts/common.sh@355 -- $ echo 1 00:11:54.157 01:49:23 -- scripts/common.sh@365 -- $ ver1[v]=1 00:11:54.157 01:49:23 -- scripts/common.sh@366 -- $ decimal 2 00:11:54.157 01:49:23 -- scripts/common.sh@353 -- $ local d=2 00:11:54.157 01:49:23 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:11:54.157 01:49:23 -- scripts/common.sh@355 -- $ echo 2 00:11:54.157 01:49:23 -- scripts/common.sh@366 -- $ ver2[v]=2 00:11:54.157 01:49:23 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:11:54.157 01:49:23 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:11:54.157 01:49:23 -- scripts/common.sh@368 -- $ return 0 00:11:54.157 01:49:23 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:54.157 01:49:23 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:11:54.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.157 --rc genhtml_branch_coverage=1 00:11:54.157 --rc genhtml_function_coverage=1 00:11:54.157 --rc genhtml_legend=1 00:11:54.157 --rc geninfo_all_blocks=1 00:11:54.157 --rc geninfo_unexecuted_blocks=1 00:11:54.157 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:54.157 ' 00:11:54.157 01:49:23 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:11:54.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.157 --rc genhtml_branch_coverage=1 00:11:54.157 --rc genhtml_function_coverage=1 00:11:54.157 --rc genhtml_legend=1 00:11:54.157 --rc geninfo_all_blocks=1 00:11:54.157 --rc 
geninfo_unexecuted_blocks=1 00:11:54.157 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:54.157 ' 00:11:54.157 01:49:23 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:11:54.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.157 --rc genhtml_branch_coverage=1 00:11:54.157 --rc genhtml_function_coverage=1 00:11:54.157 --rc genhtml_legend=1 00:11:54.157 --rc geninfo_all_blocks=1 00:11:54.157 --rc geninfo_unexecuted_blocks=1 00:11:54.157 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:54.157 ' 00:11:54.157 01:49:23 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:11:54.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.157 --rc genhtml_branch_coverage=1 00:11:54.157 --rc genhtml_function_coverage=1 00:11:54.157 --rc genhtml_legend=1 00:11:54.157 --rc geninfo_all_blocks=1 00:11:54.157 --rc geninfo_unexecuted_blocks=1 00:11:54.157 --gcov-tool /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh 00:11:54.157 ' 00:11:54.157 01:49:23 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/common.sh 00:11:54.157 01:49:23 -- scripts/common.sh@15 -- $ shopt -s extglob 00:11:54.157 01:49:23 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:11:54.157 01:49:23 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:54.157 01:49:23 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:54.157 01:49:23 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.157 01:49:23 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.157 01:49:23 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.157 01:49:23 -- paths/export.sh@5 -- $ export PATH 00:11:54.157 01:49:23 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.157 01:49:23 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output 00:11:54.157 01:49:23 -- common/autobuild_common.sh@486 -- $ date +%s 00:11:54.157 01:49:23 -- 
common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728431363.XXXXXX 00:11:54.157 01:49:23 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728431363.WdPgqK 00:11:54.157 01:49:23 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:11:54.157 01:49:23 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:11:54.157 01:49:23 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/' 00:11:54.157 01:49:23 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp' 00:11:54.157 01:49:23 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:11:54.157 01:49:23 -- common/autobuild_common.sh@502 -- $ get_config_params 00:11:54.157 01:49:23 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:11:54.157 01:49:23 -- common/autotest_common.sh@10 -- $ set +x 00:11:54.158 01:49:23 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:11:54.158 01:49:23 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:11:54.158 01:49:23 -- pm/common@17 -- $ local monitor 00:11:54.158 01:49:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:11:54.158 01:49:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:11:54.158 01:49:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:11:54.158 01:49:23 -- pm/common@21 -- $ date +%s 00:11:54.158 01:49:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:11:54.158 01:49:23 -- pm/common@21 -- $ date +%s 00:11:54.158 01:49:23 -- pm/common@21 -- $ date +%s 00:11:54.158 01:49:23 -- pm/common@25 -- $ sleep 1 00:11:54.158 01:49:23 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728431363 00:11:54.158 01:49:23 -- pm/common@21 -- $ date +%s 00:11:54.158 01:49:23 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728431363 00:11:54.158 01:49:23 -- pm/common@21 -- $ /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728431363 00:11:54.158 01:49:23 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1728431363 00:11:54.158 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728431363_collect-vmstat.pm.log 00:11:54.158 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728431363_collect-cpu-temp.pm.log 00:11:54.158 Redirecting to 
/var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728431363_collect-cpu-load.pm.log 00:11:54.158 Redirecting to /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1728431363_collect-bmc-pm.bmc.pm.log 00:11:55.533 01:49:24 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:11:55.533 01:49:24 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:11:55.533 01:49:24 -- spdk/autopackage.sh@14 -- $ timing_finish 00:11:55.533 01:49:24 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:11:55.533 01:49:24 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:11:55.534 01:49:24 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/timing.txt 00:11:55.534 01:49:24 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:11:55.534 01:49:24 -- pm/common@29 -- $ signal_monitor_resources TERM 00:11:55.534 01:49:24 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:11:55.534 01:49:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:11:55.534 01:49:24 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:11:55.534 01:49:24 -- pm/common@44 -- $ pid=4058543 00:11:55.534 01:49:24 -- pm/common@50 -- $ kill -TERM 4058543 00:11:55.534 01:49:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:11:55.534 01:49:24 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:11:55.534 01:49:24 -- pm/common@44 -- $ pid=4058545 00:11:55.534 01:49:24 -- pm/common@50 -- $ kill -TERM 4058545 00:11:55.534 01:49:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:11:55.534 01:49:24 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:11:55.534 01:49:24 -- pm/common@44 -- $ pid=4058547 00:11:55.534 01:49:24 -- pm/common@50 -- $ kill -TERM 4058547 00:11:55.534 01:49:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:11:55.534 01:49:24 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:11:55.534 01:49:24 -- pm/common@44 -- $ pid=4058573 00:11:55.534 01:49:24 -- pm/common@50 -- $ sudo -E kill -TERM 4058573 00:11:55.534 + [[ -n 3919973 ]] 00:11:55.534 + sudo kill 3919973 00:11:55.543 [Pipeline] } 00:11:55.558 [Pipeline] // stage 00:11:55.564 [Pipeline] } 00:11:55.580 [Pipeline] // timeout 00:11:55.586 [Pipeline] } 00:11:55.600 [Pipeline] // catchError 00:11:55.605 [Pipeline] } 00:11:55.621 [Pipeline] // wrap 00:11:55.627 [Pipeline] } 00:11:55.640 [Pipeline] // catchError 00:11:55.649 [Pipeline] stage 00:11:55.651 [Pipeline] { (Epilogue) 00:11:55.663 [Pipeline] catchError 00:11:55.665 [Pipeline] { 00:11:55.678 [Pipeline] echo 00:11:55.680 Cleanup processes 00:11:55.686 [Pipeline] sh 00:11:55.969 + sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:11:55.969 4058697 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/short-fuzz-phy-autotest/spdk/../output/power/sdr.cache 00:11:55.969 4058941 sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:11:55.982 [Pipeline] sh 00:11:56.264 ++ sudo pgrep -af /var/jenkins/workspace/short-fuzz-phy-autotest/spdk 00:11:56.264 
++ grep -v 'sudo pgrep' 00:11:56.264 ++ awk '{print $1}' 00:11:56.264 + sudo kill -9 4058697 00:11:56.276 [Pipeline] sh 00:11:56.558 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:12:08.775 [Pipeline] sh 00:12:09.058 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:12:09.058 Artifacts sizes are good 00:12:09.073 [Pipeline] archiveArtifacts 00:12:09.080 Archiving artifacts 00:12:09.209 [Pipeline] sh 00:12:09.491 + sudo chown -R sys_sgci: /var/jenkins/workspace/short-fuzz-phy-autotest 00:12:09.505 [Pipeline] cleanWs 00:12:09.515 [WS-CLEANUP] Deleting project workspace... 00:12:09.515 [WS-CLEANUP] Deferred wipeout is used... 00:12:09.521 [WS-CLEANUP] done 00:12:09.523 [Pipeline] } 00:12:09.541 [Pipeline] // catchError 00:12:09.554 [Pipeline] sh 00:12:09.852 + logger -p user.info -t JENKINS-CI 00:12:09.861 [Pipeline] } 00:12:09.875 [Pipeline] // stage 00:12:09.881 [Pipeline] } 00:12:09.895 [Pipeline] // node 00:12:09.900 [Pipeline] End of Pipeline 00:12:09.943 Finished: SUCCESS
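The process cleanup the Epilogue ran above (pgrep for anything still referencing the job workspace, drop the pgrep itself, kill the rest) condenses to the sketch below. The workspace path is the one used throughout this job; guarding the kill so the step is a no-op when nothing matches is an added assumption rather than something shown in the trace.

# Sketch of the Epilogue cleanup idiom seen above.
SPDK=/var/jenkins/workspace/short-fuzz-phy-autotest/spdk
pids=$(sudo pgrep -af "$SPDK" | grep -v 'sudo pgrep' | awk '{print $1}')
[ -n "$pids" ] && sudo kill -9 $pids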